Should you store your SQL Stored Procedures in Source Control? - sql

When developing an application with lots of stored procedures, should you store them in some sort of source versioning system (such as SourceSafe, TFS, or SVN)? If so, why? And is there a convenient front-end way to do this with SQL Server Management Studio?

Yes. All code should be stored in source control.
Simply put, code is code and mistakes happen. It's nice to be able to go back and see what changed over time and be able to go back to those changes.
We add procedures to source control manually, but you can create add-ins for SQL Server Management Studio. I haven't ever created one that commits code automatically, but I suppose you could. Also, all of the code is stored in system tables, so you could in theory write a process that reads the definitions out of those tables and commits them automatically.
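For example, here is a minimal query (a sketch of my own, assuming the SQL Server 2005+ system views) that pulls every procedure's definition, so an external job could write each one out to a .sql file and commit it:

SELECT s.name AS schema_name,
       p.name AS procedure_name,
       m.definition            -- full CREATE PROCEDURE text
FROM sys.procedures AS p
JOIN sys.schemas     AS s ON s.schema_id = p.schema_id
JOIN sys.sql_modules AS m ON m.object_id = p.object_id
ORDER BY s.name, p.name;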
Update: I always write extra code that checks whether the procedure exists; if it doesn't, the script creates a stub procedure, and then the rest of the script does an ALTER PROCEDURE:
IF NOT EXISTS (SELECT * FROM dbo.sysobjects
               WHERE id = OBJECT_ID(N'[dbo].[SomeStoredProcedure]')
                 AND OBJECTPROPERTY(id, N'IsProcedure') = 1)
    EXEC sp_executesql N'CREATE PROCEDURE [dbo].[SomeStoredProcedure] AS SELECT ''SPROC Template'''
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[SomeStoredProcedure]
AS
    -- actual procedure body goes here
    SELECT 'SPROC Template';
GO
Doing a drop and recreate instead would remove all the user permissions you have set up for it.
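If you do ever drop and recreate, re-apply the grants in the same script; a one-line illustration (the role name here is made up):

GRANT EXECUTE ON [dbo].[SomeStoredProcedure] TO [AppRole];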

ABSOLUTELY POSITIVELY WITHOUT QUESTION NO EXCEPTIONS IN ALL PERPETUITY THROUGHOUT THE UNIVERSE YES!

Get your database under version control. Check the series of posts by Scott Allen.
When it comes to version control, the database is often a second or even third-class citizen. From what I've seen, teams that would never think of writing code without version control in a million years-- and rightly so-- can somehow be completely oblivious to the need for version control around the critical databases their applications rely on. I don't know how you can call yourself a software engineer and maintain a straight face when your database isn't under exactly the same rigorous level of source control as the rest of your code. Don't let this happen to you. Get your database under version control.

I recommend that you do store them. You never know when you'll need to roll back, or dig into logic you may have removed.
Here's a good way to easily grab your stored procs into files that you can put into whatever source control you desire:
Stored Procedures to .sql files

Storing stored procedures is a great idea. It's a pain, though. Just how do you get all that stuff into Subversion? You can do it manually, but then it's tedious and you end up not doing it at all.
I use a tool from the SubSonic project.
sonic.exe version /server servername /db databasename /out outputdirectory
This command saves everything to two text files. One contains the database schema, stored procs, user accounts, constraints, and primary keys. The other one contains the data.
Now that you have these two files, you can use Subversion (or CVS, SourceSafe) to put them into source control.
More info for using The Command Line Tool (SubCommander)

Most definitely yes. Then the question becomes how you store them in source control: do you drop and recreate the stored procedure or just alter it, and do you add permissions at the end of the script or in a separate script? There was a post on Coding Horror a while back on the topic that I found interesting: Is Your Database Under Version Control?

Sure you should.
In MS SQL 2008, you can do it right from Management Studio.

SQL is code. All code belongs under source code control.
That is all.

Absolutely.
Positively.
A set of SPs is an interface, and one that is likely to be modified more frequently than the table structure itself.
And because SPs contain business logic, changes should be stored in version control to track the modifications and adjustments to the logic.
Storing these in version control is a symptom of organizational maturity at a coding level, and is a best practice.

Most definitely.

You should.
To my knowledge, no such tool exists to automate this process. At least, five years ago, when I was considering building one, there didn't seem to be any competition.

We store our procs in Subversion. All of your SQL code, including DDL, should be in some kind of source control repository.

SPs, and table schemas for that matter, are all assets that should be under version control. In a perfect world the DB would be built from scripts, including the test data, as part of your CI process. Even if that's not the case, having a database per developer is a good model to follow. That way new ideas can be tried out in a local sandbox without impacting everyone; once the change is tested it can be checked in.
Management Studio can be linked to source control, although I don't have experience of doing this. We've always tracked our SP/schema as files. Management Studio can automatically generate change scripts, which are very useful, as a table drop/recreate can be too heavy-handed for any table that has data.

SQL procs also surely need the same security/benefits of version control as the rest of the code in the project.

As others have said, yes they should be.
I don't know of an easy way to do this with SQL Server Management Studio, but if you also use Visual Studio, database projects are a great way to manage this.

There are methods in SMO to generate scripts if you prefer to code your own scripting tool.
http://www.sqlteam.com/article/scripting-database-objects-using-smo-updated

If you're not using asset management alongside source control, then I say throw everything in source control. Images, Word documents, the whole shebang. You can't lose it, you can always reverse any changes to it, and if any machine goes down, nothing is lost.

Related

Possible to see what a stored proc was previously, before I changed it?

I replaced a proc with an updated version from another db, however the proc had some new changes itself that I overwrote in the process.
Not a huge problem as it's a dev db and I can restore from a backup... except that I will have to go track down where the backup is and restore it and what a pain.
I don't suppose there is any helpful system table in SQL that might show me what the definition was before I changed it? Guessing not, but figured it doesn't hurt to ask.
In short, the only way to answer is no.
SQL Server does not track changes to your procedures or other objects; it is not a version control system. Unfortunately, that's a separate process you have to put in place yourself. Numerous solutions exist that integrate with various repositories such as SVN or Git, for example Redgate SQL Source Control.
A little-known feature allows you to append a number to the name of a procedure to create your own versions; it's marked as deprecated in the official documentation but still works in SQL Server 2019.
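A rough sketch of that numbered-procedure syntax (the procedure name is made up); as far as I recall, EXEC without a number runs version 1, and DROP PROCEDURE removes the whole group:

-- base version must be created first
CREATE PROCEDURE dbo.usp_GetOrders;1 AS SELECT 'version 1';
GO
-- a second version in the same group
CREATE PROCEDURE dbo.usp_GetOrders;2 AS SELECT 'version 2';
GO
EXEC dbo.usp_GetOrders;2    -- runs version 2 explicitly
GO
DROP PROCEDURE dbo.usp_GetOrders;   -- drops every numbered version in the group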

Management Studio Backup SPs / UDFs

Just wanted to know if there is an easy way to back up stored procedures and/or user-defined functions?
As a developer, one would usually want to retain existing versions of various database objects on production ( i.e. objects like tables / views / triggers / SPs / UDFs / anything MS manages ) so as to be able to revert to the most recent state of the database in case the situation for a rollback were to arise.
We know a backup of the db would fit the bill, but it would be overkill if the change was to a single SP.
At present, the process is manual and therefore time-consuming and prone to human error, for a mundane task that should really be automated.
I'm asking if it is somehow possible (or at least if it is in MS's dev pipeline) to set a server state so it knows to "back up" anything that is altered. Every db object would then have an "archive" or "older versions" folder that one could use to browse the object's X most recent versions.
I don't know if you were aware of it, but you can use source control with SSMS.
Better yet, see Working with Database Projects. Database Projects in Visual Studio 2010 bring database developers many of the features that "code" developers have had since forever, including source control and automated deployment.
In Object Explorer, go to the desired database, right-click to open the context menu, and select Tasks > Generate Scripts. The wizard will walk you through the process. It is sometimes tricky, so play around with it. Good luck.
I wrote a command-line utility called SMOscript which scripts all database object definitions to file.
One solution that I've seen employed is to rename the existing object with sp_rename and then deploy your new one. So, if you have sp_foobar in your database and you want to deploy a new version of it, you'd rename the existing one to sp_foobar_20111017012345 and then deploy the new one. If you need to revert to the old version (or any previous version), you'd do a select name from sys.objects where name like 'sp_foobar%', find the right one, drop sp_foobar, and use sp_rename to rename the appropriate one back to sp_foobar.
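A rough sketch of that approach (the names and timestamp are illustrative):

EXEC sp_rename 'dbo.sp_foobar', 'sp_foobar_20111017012345';
GO
CREATE PROCEDURE dbo.sp_foobar
AS
    SELECT 1;   -- new implementation goes here
GO
-- later, to find and restore an older version:
SELECT name FROM sys.objects WHERE name LIKE 'sp[_]foobar%' AND type = 'P';
DROP PROCEDURE dbo.sp_foobar;
EXEC sp_rename 'dbo.sp_foobar_20111017012345', 'sp_foobar';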

SQL SERVER Project

My application database has no project and is not in SourceSafe. I planned to turn the DB into a project and add it to TFS, but I have no idea how to script the stored procedures, triggers, views, and functions, or what the best practice is for producing update scripts for all of my stored procedures, triggers, views, and functions for my customers' databases.
The best procedure (IMHO) is to manually maintain a strict version of your schemas. Then when you need to make changes you write a delta script to move from one version to the next. I suggest you write the DDL scripts by hand -- you can make them concise and comment them.
You can use a tool like Visual Studio Team System for Database Architects. Take a look at Running static code analysis on SQL Server database with Visual Studio Team System for database architects; it will show you how to import the data. Disregard the static code analysis that comes later, as it does not apply to your question.
I've found a good way to get SQL scripts into SCM from an existing database is to use SSMS's "export all to script" option, or whatever it's called, can't remember now.
Then for every subsequent change you add the change script to your SCM with a new version number in the file name.
Every release (or set cycle depending on your development/release methodology) you apply all change scripts, then re-script the entire database, tag it, and start again.
The best way to do it is to save the database in TFS as a set of database creation scripts, i.e. the MyTable table should be added to TFS as a MyTable.sql file (CREATE TABLE...), etc. We are using SQL Examiner to do this - see the following article: How to keep your database under version control
We are working with SVN and I never tested SQL Examiner with TFS, but I know that the tool supports TFS.

Creating a CHANGE script in Management Studio?

I was wondering if there is a way to automatically append to a script file all the changes I am making to my columns, tables, relationships etc...
The thing is I am doing a lot of different changes on a TEST db and the idea will be to apply this change script when I move the test db to production... hence keeping production data but applying all schema and object changes.
Is there an easy way to do this? Can it also migrate database diagram changes?
I have seen how you can create a change script each time you make a change, but this means I have to copy and paste it into a master file. Actually, that's pretty easy!
I was just wondering if I was missing something?
Do not make changes to the test server using the UI. Write scripts and keep them under source control. You can test your scripts starting from backups of the live data, and you can tune your scripts until they achieve the desired result. Then you can check in the scripts for reference and later apply them on the live server. See this article: Version Control and Your Database.
BTW, check out the SSMS Tools Pack; I think it may do what you want (I'm not sure). My advice stands nonetheless: version your schema, use explicitly created/saved scripts, use source control.
There's no way to directly generate a "delta" script in SSMS.
However, if every time you publish changes you script out the entire database, including data, using the SQL Server Database Publishing Wizard, you should be able to extract diffs between the versions and get your deltas that way.
If money is no object, you can purchase Visual Studio Team System Database Architect edition and use its fantastic database comparison tools to generate and version control exactly the diffs you want.
Try using tablediff, which came with SQL Server 2005.
SQL Server 2005 TableDiff Utility
tablediff Utility
We have the process where when a developer gets done with a change, they then script it out and check it into Subversion. In Subversion we have a folder for Tables, Stored Procs, Data, etc. They script it out so it is repeatable (i.e. don’t insert the new data if it is already there.) This is important to do anyway so you keep the history of changes for a given object in the database.
In the past, we would just enter each of the files that we wanted scripted out into a text file (i.e. FileListV102.txt). When we were ready to make a release we would do “get latest” on all of the files (from VSS back then.) We then had a simple utility that would read the “file list” file and open each of those files in turn concatenating them into an output file. That is pretty easy to code.
We outgrew that, and now we have a release management tool (which can be found here and will be on sale mid September) that takes all of the files and creates a big SQL script file out of them. It does it in the order that you would expect based on the folder names, so files found in the "Tables" folder are done before those in the "Data" folder, etc.
Either way, once you are done you have a big SQL script file that you can then apply to a fresh copy of production and that is what you test against.
I know I'm way late to the party, but I just wanted to add that there are dozens of third-party products out there. Some are very good, some are very cheap or free, and some are a mixture. I listed 22 here:
http://bertrandaaron.wordpress.com/2012/04/20/re-blog-the-cost-of-reinventing-the-wheel/
We have been using a relatively new piece of software called Kal Admin.
It has a change management feature and makes it very easy to distribute selected changes to other databases. We used to do this by comparing two databases, but that did not satisfy our need for change tracking.
BTW, Kal Admin has metadata and data compare capabilities as well.

Is there a version control system for database structure changes?

I often run into the following problem.
I work on some changes to a project that require new tables or columns in the database. I make the database modifications and continue my work. Usually, I remember to write down the changes so that they can be replicated on the live system. However, I don't always remember what I've changed and I don't always remember to write it down.
So, I make a push to the live system and get a big, obvious error that there is no NewColumnX, ugh.
Regardless of the fact that this may not be the best practice for this situation, is there a version control system for databases? I don't care about the specific database technology. I just want to know if one exists. If it happens to work with MS SQL Server, then great.
In Ruby on Rails, there's a concept of a migration -- a quick script to change the database.
You generate a migration file, which has rules to increase the db version (such as adding a column) and rules to downgrade the version (such as removing a column). Each migration is numbered, and a table keeps track of your current db version.
To migrate up, you run a command called "db:migrate" which looks at your version and applies the needed scripts. You can migrate down in a similar way.
The migration scripts themselves are kept in a version control system -- whenever you change the database you check in a new script, and any developer can apply it to bring their local db to the latest version.
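The same pattern can be hand-rolled in plain T-SQL; here is a minimal sketch (the table, column, and file names are all made up) of a version table plus an idempotent, numbered change script:

CREATE TABLE dbo.SchemaVersion
(
    Version   INT       NOT NULL PRIMARY KEY,
    AppliedOn DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);
GO
-- contents of 003_add_customer_email.sql, checked into source control
IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion WHERE Version = 3)
BEGIN
    ALTER TABLE dbo.Customer ADD Email NVARCHAR(256) NULL;
    INSERT INTO dbo.SchemaVersion (Version) VALUES (3);
END;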
I'm a bit old-school, in that I use source files for creating the database. There are actually two files, project-database.sql and project-updates.sql: the first for the schema and persistent data, and the second for modifications. Of course, both are under source control.
When the database changes, I first update the main schema in project-database.sql, then copy the relevant info to the project-updates.sql, for instance ALTER TABLE statements.
I can then apply the updates to the development database, test, iterate until done well.
Then, check in files, test again, and apply to production.
Also, I usually have a table in the db - Config - such as:
CREATE TABLE Config
(
cfg_tag VARCHAR(50),
cfg_value VARCHAR(100)
);
INSERT INTO Config(cfg_tag, cfg_value) VALUES
( 'db_version', '$Revision: $'),
( 'db_revision', '$Revision: $');
Then, I add the following to the update section:
UPDATE Config SET cfg_value='$Revision: $' WHERE cfg_tag='db_revision';
The db_version only gets changed when the database is recreated, and the db_revision gives me an indication how far the db is off the baseline.
I could keep the updates in their own separate files, but I chose to mash them all together and use cut & paste to extract relevant sections. A bit more housekeeping is in order, i.e., removing the ':' from '$Revision: 1.1 $' to freeze the expanded value.
MyBatis (formerly iBATIS) has a schema migration tool for use on the command line. It is written in Java, though it can be used with any project.
To achieve a good database change management practice, we need to identify a few key goals.
Thus, the MyBatis Schema Migration System (or MyBatis Migrations for short) seeks to:
Work with any database, new or existing
Leverage the source control system (e.g. Subversion)
Enable concurrent developers or teams to work independently
Make conflicts very visible and easily manageable
Allow for forward and backward migration (evolve, devolve respectively)
Make the current status of the database easily accessible and comprehensible
Enable migrations despite access privileges or bureaucracy
Work with any methodology
Encourage good, consistent practices
Redgate has a product called SQL Source Control. It integrates with TFS, SVN, SourceGear Vault, Vault Pro, Mercurial, Perforce, and Git.
I highly recommend SQL Delta. I just use it to generate the diff scripts when I'm done coding my feature and check those scripts into my source control tool (Mercurial :))
They have both a SQL Server and an Oracle version.
I'm surprised that no one has mentioned the open source tool Liquibase, which is Java-based and should work for nearly every database that supports JDBC. Compared to Rails, it uses XML instead of Ruby to perform the schema changes. Although I dislike XML for domain-specific languages, the very cool advantage of XML is that Liquibase knows how to roll back certain operations like
<createTable tableName="USER">
<column name="firstname" type="varchar(255)"/>
</createTable>
So you don't need to handle this on your own.
Pure SQL statements or data imports are also supported.
Most database engines should support dumping your database into a file. I know MySQL does, anyway. This will just be a text file, so you could submit that to Subversion, or whatever you use. It'd be easy to run a diff on the files too.
If you're using SQL Server it would be hard to beat Data Dude (aka the Database Edition of Visual Studio). Once you get the hang of it, doing a schema compare between your source controlled version of the database and the version in production is a breeze. And with a click you can generate your diff DDL.
There's an instructional video on MSDN that's very helpful.
I know about DBMS_METADATA and Toad, but if someone could come up with a Data Dude for Oracle then life would be really sweet.
Have your initial CREATE TABLE statements in version control, then add ALTER TABLE statements, but never edit files; just add more alter files, ideally named sequentially, or even grouped as a "change set", so you can find all the changes for a particular deployment.
The hardest part that I can see is tracking dependencies, e.g., for a particular deployment table B might need to be updated before table A.
For Oracle, I use Toad, which can dump a schema to a number of discrete files (e.g., one file per table). I have some scripts that manage this collection in Perforce, but I think it should be easily doable in just about any revision control system.
Take a look at the oracle package DBMS_METADATA.
In particular, the following methods are particularly useful:
DBMS_METADATA.GET_DDL
DBMS_METADATA.SET_TRANSFORM_PARAM
DBMS_METADATA.GET_GRANTED_DDL
Once you are familiar with how they work (pretty self explanatory) you can write a simple script to dump the results of those methods into text files that can be put under source control. Good luck!
Not sure if there is something this simple for MSSQL.
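For illustration, a hedged sketch of those Oracle calls (the schema and object names are made up):

-- include statement terminators in the generated DDL
EXEC DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'SQLTERMINATOR', TRUE);
-- DDL for a single table
SELECT DBMS_METADATA.GET_DDL('TABLE', 'EMPLOYEES', 'HR') FROM DUAL;
-- object grants made to the HR user
SELECT DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT', 'HR') FROM DUAL;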
I write my db release scripts in parallel with coding, and keep the release scripts in a project specific section in SS. If I make a change to the code that requires a db change, then I update the release script at the same time.
Prior to release, I run the release script on a clean dev db (copied structure wise from production) and do my final testing on it.
I've done this off and on for years -- managing (or trying to manage) schema versions. The best approaches depend on the tools you have. If you can get the Quest Software tool "Schema Manager" you'll be in good shape. Oracle has its own, inferior tool that is also called "Schema Manager" (confusing much?) that I don't recommend.
Without an automated tool (see other comments here about Data Dude) then you'll be using scripts and DDL files directly. Pick an approach, document it, and follow it rigorously. I like having the ability to re-create the database at any given moment, so I prefer to have a full DDL export of the entire database (if I'm the DBA), or of the developer schema (if I'm in product-development mode).
PL/SQL Developer, a tool from Allround Automations, has a plug-in for repositories that works OK (but not great) with Visual SourceSafe.
From the web:
The Version Control Plug-In provides a tight integration between the PL/SQL Developer IDE and any Version Control System that supports the Microsoft SCC Interface Specification. This includes most popular Version Control Systems such as Microsoft Visual SourceSafe, Merant PVCS and MKS Source Integrity.
http://www.allroundautomations.com/plsvcs.html
ER Studio allows you to reverse your database schema into the tool and you can then compare it to live databases.
Example: Reverse your development schema into ER Studio -- compare it to production and it will list all of the differences. It can script the changes or just push them through automatically.
Once you have a schema in ER Studio, you can either save the creation script or save it as a proprietary binary and keep it in version control. If you ever want to go back to a past version of the schema, just check it out and push it to your db platform.
There's a PHP5 "database migration framework" called Ruckusing. I haven't used it, but the examples show the idea: if you use the language to create the database as and when needed, you only have to track source files.
We've used MS Team System Database Edition with pretty good success. It integrates with TFS version control and Visual Studio more-or-less seamlessly and allows us to manages stored procs, views, etc., easily. Conflict resolution can be a pain, but version history is complete once it's done. Thereafter, migrations to QA and production are extremely simple.
It's fair to say that it's a version 1.0 product, though, and is not without a few issues.
You can use Microsoft SQL Server Data Tools in Visual Studio to generate scripts for database objects as part of a SQL Server project. You can then add the scripts to source control using the source control integration that is built into Visual Studio. Also, SQL Server projects let you verify the database objects using a compiler and generate deployment scripts to update an existing database or create a new one.
In the absence of a VCS for table changes I've been logging them in a wiki. At least then I can see when and why it was changed. It's far from perfect as not everyone is doing it and we have multiple product versions in use, but better than nothing.
I'd recommend one of two approaches. First, invest in PowerDesigner from Sybase, Enterprise Edition. It allows you to design physical data models and a whole lot more, and it comes with a repository that allows you to check in your models. Each new check-in can be a new version; it can compare any version to any other version, and even to what is in your database at that time. It will then present a list of every difference and ask which should be migrated, and then it builds the script to do it. It's not cheap, but it's a bargain at twice the price and its ROI is about six months.
The other idea is to turn on DDL auditing (works in Oracle). This will create a table with every change you make. If you query the changes from the timestamp you last moved your database changes to prod up to right now, you'll have an ordered list of everything you've done. A few WHERE clauses to eliminate zero-sum changes, like CREATE TABLE foo followed by DROP TABLE foo, and you can easily build a mod script. Why keep the changes in a wiki? That's double the work. Let the database track them for you.
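That DDL auditing suggestion is Oracle-specific; on SQL Server a database-level DDL trigger can capture something similar. A rough sketch (the table and trigger names are made up):

CREATE TABLE dbo.DdlChangeLog
(
    ChangeId   INT IDENTITY(1,1) PRIMARY KEY,
    EventTime  DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME(),
    LoginName  NVARCHAR(128) NOT NULL,
    EventType  NVARCHAR(128) NULL,
    ObjectName NVARCHAR(256) NULL,
    SqlText    NVARCHAR(MAX) NULL
);
GO
CREATE TRIGGER trg_LogDdlChanges ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    -- EVENTDATA() returns an XML description of the DDL statement that fired the trigger
    DECLARE @e XML = EVENTDATA();
    INSERT INTO dbo.DdlChangeLog (LoginName, EventType, ObjectName, SqlText)
    VALUES (@e.value('(/EVENT_INSTANCE/LoginName)[1]', 'NVARCHAR(128)'),
            @e.value('(/EVENT_INSTANCE/EventType)[1]', 'NVARCHAR(128)'),
            @e.value('(/EVENT_INSTANCE/ObjectName)[1]', 'NVARCHAR(256)'),
            @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'NVARCHAR(MAX)'));
END;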
Schema Compare for Oracle is a tool specifically designed to migrate changes from one Oracle database to another. Please visit the URL below for the download link, where you will be able to use the software for a fully functional trial.
http://www.red-gate.com/Products/schema_compare_for_oracle/index.htm
Two book recommendations: "Refactoring Databases" by Ambler and Sadalage and "Agile Database Techniques" by Ambler.
Someone mentioned Rails Migrations. I think they work great, even outside of Rails applications. I used them on an ASP application with SQL Server which we were in the process of moving to Rails. You check the migration scripts themselves into the VCS.
Here's a post by Pragmatic Dave Thomas on the subject.