Visual Studio Database Project - Generating test data on top of reference data - sql

I am adding continuous integration testing to an existing Visual Studio 2010 database project. We currently have a build that deploys an 'empty' database, [dbo].[MyDb], containing just the reference data it needs, such as locales and countries. This is done with SQL files full of insert statements that run in the post-deployment SQL build task.
I now want to add a second, test deployment build that deploys to another database, [dbo].[MyDb].[Test], on the same staging server, with the same reference data plus generated test data that has foreign keys to the reference data. Database integration tests are then run against it. Because the state needs to be restored for each test, this needs to be as fast as possible.
From what I've tried so far, it seems that to generate the test data with Visual Studio's data generation plan I need to get the reference data into a form the Data Bound Generator can read, so that the generated test data maintains referential integrity.
The possible options I can think of are:
- Somehow get the data generation plan to read the reference SQL files?
- Change the reference SQL files to CSV files and change the original build to do bulk inserts.
- Combine the builds so that the MyDb database is always deployed first, and set it as the Sequential Data Bound Generator source for the test database.
Has anyone got a better approach or can point to a good guide?
I'm not an expert on build scripts, so I'd like to take advantage of tools to do as much as possible. I want to keep things as a Visual Studio database project, but I also have a license for RedGate's SQL Tools if that would make the testing easier.

It appears that handling of reference data still isn't supported very well by database projects. This is confirmed by the comments on this post by Barclay Hill.
At the moment I've gone with the option of having a reference database and using it as the source for a Sequential Data Bound Generator. Since it doesn't change very often, I just deploy it manually, and I've stopped short of creating a whole separate project just for it, as I've seen done elsewhere.
Hopefully reference data handling will be added to SQL Server Data Tools at some point.
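Whichever route ends up generating the test data, the reference seed itself can stay in the post-deployment script; writing those inserts as re-runnable MERGE statements (SQL Server 2008 and later) keeps repeated deployments fast and safe. A minimal sketch, assuming a hypothetical [dbo].[Country] reference table:

```sql
-- Post-deployment reference data seed, safe to run on every deployment.
-- [dbo].[Country] and its columns are illustrative names only.
MERGE [dbo].[Country] AS target
USING (VALUES
    (1, N'AU', N'Australia'),
    (2, N'NZ', N'New Zealand')
) AS source (CountryId, IsoCode, CountryName)
    ON target.CountryId = source.CountryId
WHEN MATCHED THEN
    UPDATE SET IsoCode = source.IsoCode,
               CountryName = source.CountryName
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CountryId, IsoCode, CountryName)
    VALUES (source.CountryId, source.IsoCode, source.CountryName);
GO
```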

Related

Visual Studio 2013 SQL Server Project Deployment/Publish

I am looking for information on working with SQL Server projects in Visual Studio 2013. We are currently working on a project where we're using a database that already exists and is used by an ERP application. We're creating SQL scripts that alter tables and create fields on the target database.
Now, we're not looking to "publish" those scripts, but to create post-deploy scripts instead, containing all the necessary SQL scripts in the order they need to be run. Everything is working fine: when we build the project, we get a fresh copy of the PostDeploy.sql script file that we run against a target database.
At the moment, the script looks at a table and, if the column that needs to be added already exists, DROPs it and then recreates it. This is fine for the testing phase, but once we go live the code will need to be tested against databases at several different stages. The column may already exist from before, and in that case we don't want to DROP it; instead, we want to do a schema- and data-level compare and only bring over the objects that are DIFFERENT, so the column is "updated" rather than dropped. I hope I am not being too vague when I ask this question.
I found this video: https://www.youtube.com/watch?v=AuVpmu9CKRY and I am not sure whether that is what I need to do. I would love any suggestions from you guys.
Have a wonderful day!
Well, this isn't really the best use for SSDT/DB Projects. Ideally, you'd want to pull the schema into a project and tweak that project to look the way you want. Rename columns, change types, etc. Because it sounds like this is a 3rd party app, you'd want some environment that can serve as your baseline - when you run whatever upgrade script is sent by the vendor, it goes against that environment. You'd then want to pull the appropriate changes into your project.
Once you have a project that looks the way you want, you use the publish option against your target database. In your case, I'd likely recommend generating a script. If you're in the VS environment, you can take a look at both the script and a summary of what will be changed.
For data compares, I'd really consider something like Red Gate's SQL Data Compare (pro edition if you can). You can set up a data compare against your baseline and automate pushing the data changes. You can do that through post-deploy scripts, but you'll need to hand-code the data inserts, updates, and deletes yourself.
I've blogged about SSDT before and that may give you some ideas. Jamie Thomson has also written quite a bit about Database/SQL Projects and inspired quite a bit of what I've done.
http://schottsql.blogspot.com/2013/10/all-ssdt-articles.html
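If you do stay on the post-deploy route in the interim, the drop-and-recreate pattern described in the question can be replaced with a guard that only adds the column when it is missing, so re-running the script against a later-stage database is harmless. A rough sketch with placeholder table and column names:

```sql
-- Add the column only if it is not already there; re-running is a no-op.
-- [dbo].[Customer] and [RegionCode] are placeholder names.
IF COL_LENGTH('dbo.Customer', 'RegionCode') IS NULL
BEGIN
    ALTER TABLE [dbo].[Customer]
        ADD [RegionCode] NVARCHAR(10) NULL;
END
GO
```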

How to deploy SQL script to clients

Our company is in the process of adopting TFS as our source repository and for project management. I am in charge of the database part of the project. We are using SQL Server 2008 R2, Visual Studio 2012 and TFS Online. We have a database that is used by several of our applications. So far I have been the only one handling any changes to this database. As the company is expanding, we are going to have multiple dev teams, so I am planning to store the database as an SSDT project in TFS.
At the moment I am maintaining my database like the following:
I have separate folders for UDFs, Stored Procedures, and Config.
Under these folders I have subfolders for each object. For example, for stored procedures I have a subfolder for each stored procedure containing the SQL script to create it. The Config folder contains any script similar to SSDT's post-deployment script (for example, populating static data).
The SQL script contains code to drop the procedure and create it.
I have a C# app that concatenates all the SQL files into one single SQL file. Let's call it the FINAL script. When creating the FINAL script I can specify a version number, which adds an update statement for the version table in the database (roughly as sketched below).
FINAL script is made available for customers to download and execute on the database. So the script mainly contains any add/edit to SPs, UDFs, and static data. It does not touch any existing data (data entered by user) in most cases.
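For illustration, a typical object script plus the version update appended to the FINAL script looks roughly like this (the object, table and version names are simplified examples):

```sql
-- One object script: drop and recreate the stored procedure.
-- Names are simplified examples.
IF OBJECT_ID('dbo.usp_GetCustomer', 'P') IS NOT NULL
    DROP PROCEDURE [dbo].[usp_GetCustomer];
GO

CREATE PROCEDURE [dbo].[usp_GetCustomer]
    @CustomerId INT
AS
BEGIN
    SELECT CustomerId, [Name]
    FROM dbo.Customer
    WHERE CustomerId = @CustomerId;
END
GO

-- Version update appended by the concatenation app.
UPDATE dbo.DatabaseVersion SET VersionNumber = '1.0.2';
GO
```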
As a newbie to TFS and SSDT I am not exactly sure how this can be done using SSDT/TFS or if there is better way of doing something similar. So far what I have understood about SSDT and TFS is:
I can import an existing database to SSDT project.
This will create scripts for all objects including tables.
I can easily do a publish of the database to a local server or to a server I have access to.
Things that seem confusing so far:
How do I supply clients with my latest update script? I am thinking of manually including the FINAL script in the SSDT project, but there must be a better way of doing it.
How do I publish the changes to a copy of the database without losing any user-entered data? My guess is that the tables get created when publishing. I can take care of the static data, but I am not sure how to handle data entered by users.
Maybe there is something fundamentally wrong in my understanding of this whole thing. That is why I am here... :)
You want to pull your DB into a SQL Project. Maintain all of your changes there. This tells your system what the schema of your database should be. From there, I'd generate the dacpac files (through building the project) and provide those to your clients along with having them install the SSDT tools that include SQLPackage. They can run SQLPackage to make changes to their database to handle the schema changes automatically. This will bring their database in line with your schema, no matter how far off it might be.
I'd also create a publish profile for them to use. This lets you control some of the settings.
- You can choose to not drop any objects not in your project.
- You can choose to ignore users/permissions.
- You can set an option to not allow changes if there would be data loss.
- You can wrap everything in a transaction so a failed update rolls back.
If you give them a batch file to run, you can specify an output file or a Diff report, or have them generate their own script to do the update.
I blogged about this at http://schottsql.blogspot.com/2013/10/all-ssdt-articles.html
(or http://schottsql.blogspot.com/search/label/SSDT if that doesn't work well). That will take you through some basics of why you might want to use SQL Projects, creating them, maintaining them, and publishing the changes to an existing database.

SQL in Visual Studio 2010 & LINQ

I'm working on a project which relies on the presence of a number of tables, views and stored procedures. Until now I have built these all in SQL Server Management Studio.
Now I would like to continue to work on them inside of Visual Studio. This will provide the benefit of version control (along with a number of other benefits hopefully).
I have added a new project to my solution and started working on one of the views. When I tried to build the solution it failed as the new project didn't have a server/database associated: Error 1 SQL03006: View: [dbo].[vw_Test2] has an unresolved reference to object [EV870_ACCT_MASTER].
I was able to overcome this by:
- creating a dbschema dump using vsdbcmd.exe
- adding the dbschema dump as a reference to my database project
Is this the correct approach?
Now I can see the schema (tables, views, sprocs, etc.) in the Schema view (I had to enable display of "external elements") and the error message has gone away. Note: I had to reference it like this: [$(SQLDatabase)].[dbo].[EV870_ACCT_MASTER]
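So the view in the database project now looks something like this (the column list is just an illustration):

```sql
-- Simplified version of the view after adding the database reference;
-- the selected columns are made up for illustration.
CREATE VIEW [dbo].[vw_Test2]
AS
SELECT m.ACCT_NO, m.ACCT_NAME
FROM [$(SQLDatabase)].[dbo].[EV870_ACCT_MASTER] AS m;
```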
Now I want to know how I can work with these objects that I've scripted. I don't know how to use the new tables, views, sprocs, etc. (I want to use LINQ). Do I have to run the scripts first? And since they are "CREATE OBJECT" scripts, how will they run in future (presumably they'd fail because the object already exists in the database)? Will my project/solution know which objects need updating and update them?
Ultimately I want to take this a lot further: my aim is for the solution to be portable, with the server/database configurable. My tables, views and stored procedures would then be created or amended if they don't exist or are out of date. Is this possible?
And when I then start working with the views using LINQ, can those server/database references remain dynamic?
I know there are quite a few questions in there, but I'm hoping someone will be able to point me in the right direction; there doesn't seem to be much useful documentation online (or at least none that I've stumbled across so far).
Thanks
Lee
Where I work (and the last place I worked) we distribute the SQL scripts to create the database along with the app. A version number is stored in the database, and when the app runs it checks whether its own version is newer than the number stored there. If so, it knows it may need to run some new SQL scripts in case there were any schema changes. When this happens, we just run through all the scripts, because they are written so that running them multiple times won't hurt anything; this way we don't have to worry about tracking which scripts are new. Just check the version number and that's it.
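A sketch of the idea, assuming a made-up dbo.AppVersion table (the version check could just as well live in the app rather than in the script):

```sql
-- Skip the upgrade if the database is already at (or past) this version.
-- dbo.AppVersion and its column are illustrative names.
DECLARE @DbVersion INT;
SELECT @DbVersion = VersionNumber FROM dbo.AppVersion;

IF @DbVersion < 102
BEGIN
    -- Re-runnable change scripts go here; each one guards itself
    -- (IF NOT EXISTS checks etc.) so running it twice does no harm.

    UPDATE dbo.AppVersion SET VersionNumber = 102;
END
GO
```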
As far as working with this stuff in Visual Studio instead of Management Studio, I'm not sure why anyone would want to do that. Depending on what you use for source control, you may be able to get hooks for Management Studio, but even if not, that doesn't stop you from keeping your SQL scripts in source control. I wouldn't switch from working with my SQL files in Management Studio to Visual Studio just for the benefit of built-in source control.

Automatic incremental SQL Script generation for incremental, nightly builds when using Team Build in TFS 2008 and Visual Studio 2008?

Hope that everybody here is OK.
We are using VS 2008 as our development tool and TFS 2008 for version control as well as build automation. Some of our developers use DBPro for database changes and some use SQL Server Management Studio.
I am trying to automate the build for a web application built using C# and VB.Net.
Our scenario is such that we have a central database to which our web application connects.
Whenever we supply our clients with a new functionality or a bug fix, we supply them incremental builds.
The SQL script is checked into source control for every incremental build, once the developers have made and tested their changes on our central DB server.
I want to generate a differential script that can be run at the client site as an incremental update. Producing it reliably is the problem: sometimes our developers forget database change-sets, and the script in source control ends up missing an SP or two.
Also, we sometimes need to insert default data into tables that must hold strict, required values rather than test values. For example, for a table that holds the services provided by the panel, we add a new service name, signature, credentials, service address and so on to the ServiceTable. Besides this, many other tables may contain test data that is not needed.
If we use Data Compare, it will generate a changeset for the required data (important for the client to enable certain services) as well as for the test data that was added to the database while testing the functionality or bug fix.
Currently I am using SQLSchemaCompareTask (from the Visual Studio 2008 Team Database Professional Power Tools API) in the TFSBuild.proj file of the TFS 2008 build definition.
With SQLSchemaCompareTask, the generated script contains name prefixes such as [dbo]., which are not desired because the script fails when run against SQL Server 2000 databases (some of our clients still use SQL Server 2000 as the backend of the application).
Also, default data can't be generated by this process.
To overcome this, I have to come up with a solution that can compare databases and automatically generate a script that does not have to be manually reviewed before being sent to the client.
Please suggest an effective methodology for generating such SQL scripts, and whether two different databases should be used for the comparison. Is there a toolkit or API that enables build automation for SQL Server databases?
Thank you all.
Regards
Steve
Try to use SQL Examiner Suite for this:
http://www.sqlaccessories.com/SQL_Examiner_Suite/
The tool compares both schema and data and produces synchronization scripts (differential scripts). You can automate script creation with the supplied command-line tool.
Rather than collating many individual change-set scripts (and therefore occasionally missing objects), why not use schema compare and data compare to create a single script from your database project, using a database equivalent to your client's as the target? This should produce a script tailored to their requirements.
In data compare you can exclude test data records that you don't want pushed to your client by unchecking them in the lower grid.
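Alternatively, you can hand-script just the required rows so that only the data the client needs ships and test rows never enter the script at all. A small sketch using the ServiceTable columns mentioned in the question (the values are invented):

```sql
-- Seed a required service only if it is missing; test rows are simply
-- never included in this script. Values are invented for illustration.
IF NOT EXISTS (SELECT 1 FROM dbo.ServiceTable WHERE ServiceName = 'AccountLookup')
BEGIN
    INSERT INTO dbo.ServiceTable (ServiceName, [Signature], Credentials, ServiceAddress)
    VALUES ('AccountLookup', 'v1', 'svc_account', 'https://services.example.com/account');
END
GO
```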

Creating a CHANGE script in Management Studio?

I was wondering if there is a way to automatically append to a script file all the changes I am making to my columns, tables, relationships etc...
The thing is I am doing a lot of different changes on a TEST db and the idea will be to apply this change script when I move the test db to production... hence keeping production data but applying all schema and object changes.
Is there an easy way to do this? Can it also migrate database diagram changes?
I have seen that you can create a change script each time you make a change, but this means I have to copy and paste it into a master file. Actually, that's pretty easy!
I was just wondering if I was missing something?
Do not make changes to the test server using the UI. Write scripts and keep them under source control. You can test your scripts starting from backups of the live data, and you can tune them until they achieve the desired result. Then you can check in the scripts for reference and later apply them on the live server. See this article: Version Control and Your Database.
BTW, check out the SSMS toolpack; I think it may do what you want (I'm not sure). My advice stands nonetheless: version your schema, use explicitly created/saved scripts, use source control.
There's no way to directly generate a "delta" script in SSMS.
However, if every time you publish changes, you script out the entire database, including data, to SQL using the SQL Server Database Publishing Wizard you should be able to extract diffs between the versions and get your deltas that way.
If money is no object, you can purchase Visual Studio Team System Database Architect edition and use its fantastic database comparison tools to generate and version control exactly the diffs you want.
Try using tablediff, which came with SQL Server 2005.
SQL Server 2005 TableDiff Utility
tablediff Utility
We have the process where when a developer gets done with a change, they then script it out and check it into Subversion. In Subversion we have a folder for Tables, Stored Procs, Data, etc. They script it out so it is repeatable (i.e. don’t insert the new data if it is already there.) This is important to do anyway so you keep the history of changes for a given object in the database.
In the past, we would just enter each of the files that we wanted scripted out into a text file (i.e. FileListV102.txt). When we were ready to make a release we would do “get latest” on all of the files (from VSS back then.) We then had a simple utility that would read the “file list” file and open each of those files in turn concatenating them into an output file. That is pretty easy to code.
We outgrew that, and now we have a release management tool (which can be found here and will be on sale mid-September) that takes all of the files and creates one big SQL script out of them. It does so in the order you would expect based on the folder names, so files found in the "Tables" folder are run before those in the "Data" folder, and so on.
Either way, once you are done you have a big SQL script file that you can then apply to a fresh copy of production and that is what you test against.
I know I'm way late to the party, but I just wanted to add that there are plenty of third-party products out there. Some are very good, some are very cheap or free, and some are a mixture. I listed 22 here:
http://bertrandaaron.wordpress.com/2012/04/20/re-blog-the-cost-of-reinventing-the-wheel/
We have been using a relatively new tool called Kal Admin.
It has a change management feature and makes it very easy to distribute selected changes to other databases. We used to do this by comparing two databases, but that did not satisfy our need for change tracking.
BTW, Kal Admin has metadata and data compare capabilities as well.