FULL SQL version control using Team Foundation Server

Team Foundation Server version-control of Web Applications
I feel let down :( ...
Checking .mdf files in and out, branching, and merging all work well in TFVC; however, there is no data conflict resolution (conflict resolution for code is great!). Only a choice between the whole source or target file is offered.
I am not suggesting that data tables be displayed side by side: scripts in DAC/'database projects' can be compared for changes in table structure, and data differences can be compared via stored procedures.
Some method must be in common use, but apparently MS expects code and data versioning to be managed iteratively/separately? (I would like to avoid trialling additional proprietary software like Red Gate SQL Source Control.)
FULL version control is required because new feature branches will change the DB, but core/testing data needs to be retained.
So PLEASE!! Help me with pointers to straightforward FULL version-control practices for web applications that include SQL versioning with intelligent merge and rollback capabilities.
Many thanks!

You could try Red Gate Deployment Manager to manage your deployments; it also comes with a free community edition. Although it is not strictly mandatory, we would recommend that you use it in conjunction with SQL Source Control, which would allow you to specify static data tables to put in version control. Although this is third-party software, the database objects are saved as plain-text .sql files, not in a proprietary format.
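To illustrate what ends up in the repository, here is a hedged sketch of the kind of plain-text CREATE script such a tool stores per object (the table name and columns are hypothetical):

    -- Hypothetical example: each database object lives in version control
    -- as an ordinary, diffable CREATE script.
    CREATE TABLE [dbo].[Customer]
    (
        [CustomerId] INT IDENTITY(1, 1) NOT NULL,
        [Name]       NVARCHAR(100)      NOT NULL,
        CONSTRAINT [PK_Customer] PRIMARY KEY CLUSTERED ([CustomerId])
    );

Because every object is just text, branching, merging, and history work exactly as they do for application code.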

Related

Compare code between two stored procedures in two different databases in SQL Server

There are two stored procedures that have the same name and the same logic, but they live in different databases. Only some of the identifiers differ (for example, synonym names, table names, schema); the rest of the code is the same.
Is there any way to compare the code of these two stored procedures?
Personally, I prefer to use an SSDT project (SQL Server Data Tools).
It allows you to store the entire database schema in a Visual Studio project and in git/svn.
It is capable of comparing two databases (in full) or a database with the SSDT project schema. It will show you the differences and allow you to publish those differences.
Configurable and powerful.
I highly recommend it.
For many types of comparisons (folder, text, etc.) you can use Beyond Compare (they offer a 30-day trial, but after it expires you can simply reinstall it).
If you want something free, you can use a compare plugin for Notepad++, but it is not as fancy as the first tool:
The Plugin Manager can be opened from Plugins -> Plugin Manager -> Show Plugin Manager.
I am using the file comparison tool 'Araxis Merge' to compare files in my project.
Please check this link to download the tool: http://www.araxis.com/merge/index.en
You can also download the 30-day free trial version.
Even though the source code for stored procedures can readily be retrieved from the database, they should really reside in a VCS for any database that is (or will be) productionised.
Historically, this has been something of a manual process, but later versions of Visual Studio include the SQL Server Database Project type, which makes development and deployment much easier.
The question then becomes the far easier one of how to compare files within the VCS, which is typically trivial, as this sort of thing is generally provided out of the box.
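As a quick illustration of pulling procedure source straight from each database for an external diff tool, here is a hedged T-SQL sketch (database and procedure names are hypothetical):

    -- Retrieve the text of a stored procedure; run once per database and
    -- feed the two results to any diff tool.
    SELECT OBJECT_DEFINITION(OBJECT_ID(N'dbo.usp_GetOrders'));

    -- Or pull both from one connection using three-part names:
    SELECT m.definition
    FROM DatabaseA.sys.sql_modules AS m
    WHERE m.object_id = OBJECT_ID(N'DatabaseA.dbo.usp_GetOrders');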
Aquastudio does it pretty well.

How to keep 2 database schemas consistent without affecting the data at all?

I have two server machines (one for development, the other for clients) with SQL Server 2008 installations. Whenever a developer makes changes to tables/views/stored procedures on the development server, they need to be reflected on the client server as well.
Currently, I am manually handling all changes, like new columns in tables, changes in stored procedures, etc. Can DB scripts or replication automate the entire procedure for me? Or is there some better solution to keep the database schemas consistent?
Help will be highly appreciated.
Thanks!
I highly recommend creating an environment where all schema changes are done exclusively through SQL scripts - never "manually" in any environment. Each developer has to commit the script related to his/her bug fixes (or new features) to a version control system.
Typically you'd have one big script that creates the database from scratch and one for each version upgrade (one from 1.0 to 1.1, one from 1.1 to 1.2, and so on).
If you have the manpower, it is also very handy to maintain one "from-scratch" script for each version. Whether you need that or not depends on how often an installation on an empty system is done.
We have had very good experience with using Liquibase to maintain all this. It automatically keeps track of which patches have been applied to a database and which need to be run during an upgrade. It also prevents you from running the same migration twice.
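For a flavour of what this looks like in practice, here is a minimal sketch of a Liquibase "formatted SQL" changelog (the author, id, and table names are hypothetical):

    --liquibase formatted sql

    --changeset alice:42
    ALTER TABLE Customer ADD Email NVARCHAR(255) NULL;
    --rollback ALTER TABLE Customer DROP COLUMN Email;

Liquibase records each applied changeset in a tracking table, which is how it knows what still needs to run on a given database.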
A problem that all database applications have, and a difficult one to resolve. Such a solution cannot be scheduled, as the changes made by developers need to be tested first, and you certainly don't want untested code merged with your live database. This question is of interest to me because I'm currently writing a generic solution to resolve this issue once and for all.
But in the meantime, we're using an open-source product called Open DBDiff (Google it - you can't miss it), which could do with some polishing but works well enough. You pass it your source and target databases, and it generates a script to make the target the same as the source. It does seem to have some trouble copying assemblies and user roles, but for everything else, I haven't had any trouble.
I believe a human should do the deployments, after making sure the changes have been tested and properly checked into source control. This is not something to automate fully.
The human should use tools, though. I use Visual Studio 2010 Professional, which has a powerful schema comparison tool, generates and executes deployment scripts, and has source control integration.

Why isn't database version control considered as important as application version control?

I've recently started using Kiln source control for all my projects' VB.NET code, and I don't know how I managed without it!
I've been looking for a database source control, for all my stored procedures, UDFs etc. However, I've found that there is not as much available for database version control as there is for my web files.
Why is database version control not considered as important as my web files? Surely all the programming in my database is just as important as the code in my code-behind and .aspx files?
Version controlling database objects IS important!
However, maybe it isn't considered as important because some people see a database merely as a tool that assists them? And external tools (normally) aren't version controlled.
One thing I've found hard to manage is the release process. Right now we're using Red Gate's SQL Source Control connected to SVN. When it's time for a release, we do the same as with the rest of the code: merge from one branch to the other. Then, to deploy it, we use SQL Compare to create a diff script between the merged revision and the actual database. Aside from some minor quirks and beginner's mistakes, I think this works well in an environment where there is no downtime (purposefully ;)) and which has a high-speed development process (lots of releases).
You can maintain your database artifacts in the same version control system.
A version control system is for versioning artifacts, and artifacts can be program code or a database. We used the same VCS for code and the database.
VCSs are designed to store versions of text. They can store binaries, but it is less efficient. And the DB state itself is not text and can't even be directly stored as a binary. You can store the SQL code, though.
One solution is to store a full DB dump (SQL or binary); another is to store sequences of SQL scripts that change one DB state into the next. The second approach can be automated in some environments, where it is called migrations. If you want a separate, specific VCS for a DB, you can think of migration tools as such a VCS.
There are also tools that can compare two DB states and produce a diff that is able to change the first state to the second.
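As a hedged sketch of the migration idea, each script in the sequence can guard itself with a version-tracking table (all names here are hypothetical):

    -- Migration 002: upgrade schema version 1 -> 2.
    -- The SchemaVersion table records which migrations have already run.
    IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion WHERE Version = 2)
    BEGIN
        ALTER TABLE dbo.Customer ADD Email NVARCHAR(255) NULL;
        INSERT INTO dbo.SchemaVersion (Version, AppliedAt)
        VALUES (2, GETDATE());
    END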
I suppose it depends on whether you manage the database changes (like schema changes and migrations, as mentioned by wRAR) as part of the source code repository, in the form of SQL scripts or other formats through the use of other tools, or whether you consider this database administration and handle it using traditional methods of backup/restore.
In my experience so far, although I wouldn't consider database management any less important, it does happen at a very low frequency compared to actual code changes. Your case is clearly different, but a combination of script files and database tools should take care of it.
Here's the reality.
Database version control -- that is to say, DDL, DML, and even the necessary reference data required for an application to have basic functionality -- is as important as all other application assets under version control. Databases should never be under any special exception where it is considered acceptable for their assets (objects and necessary reference data) to not be under version control. Ever.
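As one concrete illustration, the reference data mentioned above is often versioned as an idempotent script; a hedged sketch with hypothetical names:

    -- Reference data under version control: re-runnable and diffable.
    MERGE dbo.OrderStatus AS target
    USING (VALUES (1, N'Open'), (2, N'Shipped'), (3, N'Closed'))
          AS source (StatusId, Name)
    ON target.StatusId = source.StatusId
    WHEN MATCHED THEN
        UPDATE SET Name = source.Name
    WHEN NOT MATCHED THEN
        INSERT (StatusId, Name) VALUES (source.StatusId, source.Name);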
So why have databases been treated as an exception? Simple. The toolchains that make managing those assets simple haven't always been up to snuff (in the case of SQL Server, prior to Visual Studio 2008 there were no first-party tools shipped by Microsoft), and the toolchains differ on a vendor-by-vendor basis. When those toolchains are deficient, unless the organization steps up to cover that deficiency, the deficiency remains. It's technical debt, and some organizations do not prioritize it, due to either time or (sadly) skill, when the tools to make it easy don't exist or would require integrating third-party tools into the development workflow.
The worst is trying to bring older projects under version control, since you have to bring everyone kicking and screaming with you all at once, in addition to selling that value to the business. I won't disagree that there may be more pressing immediate needs of the business, but getting database assets under version control needs to be somewhere on that list, even if it's a lower priority.
There's no excuse. I've fought more than enough project managers, data architects, and even CIOs/CTOs on this -- I've even made it a point to have detractors kicked off of every project I'm working on. It needs to be done, and if it's not there needs to be a timeline that the business will agree to in which it will be done. Those who argue against it need to be shot in the face, and survivors need to be shot again.

Managing database updates

I've been thinking of ways to improve managing changes to our database structure. I have a build server that creates nightly builds, so I was thinking we could somehow create database dumps, backups, and scripts from the test environment as part of the build process. Then when deploying an update to the client we could use a tool like DBDiff to create the database update script.
Is anybody doing something similar? Is it even a good idea? Maybe some good tips on what to use to create these dumps on the build server?
Rather than identifying the differences, I recommend keeping a proper script that creates the database from scratch.
We are quite satisfied with using Liquibase to manage all DB migrations in our projects. It knows which "patches" have been applied and ensures that only those that are missing will be applied to the target database.
This is possible.
The differencing is the hard part. Once you identify the differences, you need to construct the appropriate SQL and then apply it. You can either apply it directly or create a script that you can run after review.
When both sides change, you need to decide whether the target system should keep its change or whether that change should be removed completely.
Remember that changes on the target system also include data, and if you remove some table or column, your referential integrity might be completely ruined (see the sketch below).
One more thought: you will need access to the target system in order to determine the diff. If this is a generic utility, you will need to make it an executable that runs after the fact, not part of the build.
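To make the referential-integrity point concrete, here is a hypothetical fragment a generated diff would need to emit; dropping the column alone would fail because of the dependent constraint:

    -- Dependent constraints must be dropped before the column they reference.
    ALTER TABLE dbo.Orders DROP CONSTRAINT FK_Orders_Customer;
    ALTER TABLE dbo.Orders DROP COLUMN CustomerId;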
You will find the Visual Database Tools very useful here.
http://msdn.microsoft.com/en-us/library/y5a4ezk9.aspx
There is a schema compare built right into Visual Studio (it can also be run from the command line). There is also a database project that contains a complete set of scripts for the database and the objects that it contains. This can be checked into source control along with your source code.
You can deploy a new database based on these scripts with a context menu click.
Have a look at http://www.codeproject.com/KB/architecture/Database_CI.aspx and http://www.martinfowler.com/articles/evodb.html - there's a fair amount of thinking that's already available.
We are currently looking at the Juneau CTP release, SQL Tools for Visual Studio. It has a snapshot and schema comparison feature. Basically, it can auto-generate scripts between two schemas for you. If you use this against two versions of your database, it will give you an upgrade script.
http://msdn.microsoft.com/en-us/data/gg427686
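A hedged sketch of what such an auto-generated upgrade script typically contains (the objects here are hypothetical):

    -- Generated difference between schema v1 and v2: additive changes only.
    ALTER TABLE dbo.Customer ADD LoyaltyTier TINYINT NOT NULL
        CONSTRAINT DF_Customer_LoyaltyTier DEFAULT (0);
    CREATE NONCLUSTERED INDEX IX_Customer_LoyaltyTier
        ON dbo.Customer (LoyaltyTier);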
Here at Red Gate we're close to releasing a solution which solves that precise issue using SQL Source Control and SQL Compare. We have an early access program which will allow you to try this out. Please visit the following link for sign-up details.
http://www.red-gate.com/MessageBoard/viewtopic.php?p=46951#46951

Database schemas WAY out of sync - need to get up to date without losing data

The problem: we have one application, a portion of which is used by a very small subset of the total users, and that part of the application runs off a separate database as well. In a perfect world, the schemas of the two databases would be in sync, but such is not the case. Some migrations have been run on the smaller database, most haven't, and furthermore there is nothing such as a revision number to easily identify which have and which haven't. We would like to solve this quandary for future projects. During a discussion we came up with the following possible plan of action, and I am wondering if anyone knows of a project which has already solved this problem:
What we would like to do is create an empty database from the schema of the large fully-migrated database, and then move all of the data from the smaller non-migrated database into that empty one. If it makes things easier, it can probably be assumed for the sake of this problem specifically that no migrations have ever removed anything, only added.
Otherwise, if there are other known solutions, I'd like to hear them as well.
You could use a schema comparison tool like Red Gate's SQL Compare. You can synchronize the changes without losing any data. I wrote about this and many alternative tools, ranging widely in price, here:
http://bertrandaaron.wordpress.com/2012/04/20/re-blog-the-cost-of-reinventing-the-wheel/
The nice thing is that most tools have trial versions, so you can try them out for 14 days (fully functional) and only buy one if it meets your expectations. I can't speak for the other tools, but I've been using RG for years and it is a very capable and reliable tool.
(Updated 2012-06-23 to help prevent link-rot.)
Red Gate's SQL Compare, as Aaron Bertrand mentions in his answer, is a very good option. However, if you are not permitted to purchase something, an option is to try something like the following:
1) For each database, script out all the tables, constraints, indexes, views, procedures, etc.
2) Run a DIFF, go through all the differences, and make sure that the small DB can accept them. If not, implement any changes (including data) necessary on the small DB so it can accept them.
3) Create a new empty database from the schema of the large DB.
4) Import the data from the small DB into the new DB (a sketch follows this list).
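A hedged sketch of step 4, done table by table with hypothetical database and table names, preserving identity values:

    -- Copy data from the small, non-migrated DB into the freshly created DB.
    SET IDENTITY_INSERT NewDb.dbo.Customer ON;
    INSERT INTO NewDb.dbo.Customer (CustomerId, Name)
    SELECT CustomerId, Name
    FROM SmallDb.dbo.Customer;
    SET IDENTITY_INSERT NewDb.dbo.Customer OFF;

Tables with foreign keys need to be loaded in dependency order, or with constraints disabled during the load.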
You could also reverse-engineer your database into Visual Studio as a database project. Visual Studio Team Suite Database Edition GDR R2 (I know, long name) has the capability to do a schema comparison and a data comparison, but the beauty of this approach is that you get all of your database into a nice database project where you can manage change and integrate with source control. This would allow you to build from a common source and deploy consistent changes.