New User-Defined Database Type Causing Database Project Build to Fail

We have a database project that we publish to our local database. I have introduced a new database type and, along with it, a sproc that depends on it. Since the database project will not build unless the type exists, what would be the recommended course of action?
I want the database project's publish to create the type first, then build out the sproc. This will eventually make its way to Test, Stage, and Production servers, so it's important that it works locally first.

A buddy of mine found the problem -- I had created the script as a regular script instead of using the correct Visual Studio template ("Add New Item > SQL Server > Programmability > User-Defined Table Type"), so the file's Build Action was "None" instead of "Build" as it needed to be.
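For reference, the pieces involved look something like this - a minimal sketch with hypothetical names (dbo.OrderIdList, dbo.GetOrders, and an assumed existing dbo.Orders table), each file added via the template above so its Build Action is "Build":

-- OrderIdList.sql - the user-defined table type (hypothetical name)
CREATE TYPE dbo.OrderIdList AS TABLE
(
    OrderId INT NOT NULL PRIMARY KEY
);

-- GetOrders.sql - a sproc that depends on the type
-- (dbo.Orders is an assumed existing table)
CREATE PROCEDURE dbo.GetOrders
    @Ids dbo.OrderIdList READONLY
AS
BEGIN
    SELECT o.*
    FROM dbo.Orders AS o
    INNER JOIN @Ids AS i ON i.OrderId = o.OrderId;
END;

Because both files are part of the build, SSDT resolves the dependency and the generated publish script creates the type before the procedure.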

Related

ReadyRoll server details for VSTS Build phase

I am trying to implement CI/CD for ReadyRoll. For the release portion I am using an Azure SQL server, so I have specified the server name, database name, and credentials there. However, I am not sure what details to give the build component when it creates the shadow database. I thought they were the same, but then I get an error saying that it's trying to create a database on my Azure SQL server, and it fails because there's already a database with that name there. This led me to think I am supplying the wrong values, but I am not sure what I am supposed to supply.
ReadyRoll maintains two databases:

• Target database: This is the development database or sandbox that you use for debugging and to edit schema objects (e.g. using SSMS). When you deploy, ReadyRoll executes your migration scripts against this database to upgrade it. You shouldn't drop the target database from your SQL Server instance.

• Shadow database: This is an exact copy of your database schema, created automatically from your project scripts (001.sql, 002.sql, 003.sql, etc.). It's created every time you use the ReadyRoll DbSync tool to view pending changes or to import. The shadow database is used by the SQL Compare engine (which powers ReadyRoll) as the base from which to generate a new migration script. It is safe to drop the shadow database at any time.

More information: Target and shadow databases

You can specify these arguments for the shadow database: ShadowServer, ShadowUserName, ShadowPassword, ShadowDatabase. (You can also just specify the target database.)

More information: Shadow database

A sample of the MSBuild Arguments for the Visual Studio Build task:
/p:TargetServer=XXX.database.windows.net /p:TargetUsername=XXX /p:TargetPassword=XXX /p:ShadowServer=XXX /p:TargetDatabase=XXX /p:GenerateSqlPackage=True /p:SkipDriftAnalysis=True /p:ShadowUserName=XXX /p:ShadowPassword=XXX /p:DBDeployOnBuild=True

Visual Studio Database Project: Include 'If Exists' checks for all the objects

We use TFS Continuous Integration to handle our staging and deployments of code. In our current environment, we (developers) aren't allowed to manually update databases in Production. A script must be staged and then given to a DBA to run.
By default, the database project builds and outputs a database creation script that will create all the tables and stored procedures. However, it does not include checks to see if the object already exists.
For example, when it attempts to create the Customer table, I would like the script to check whether the table already exists and, if it does, alter it instead.
Is this at all possible?
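For illustration, the kind of guard being asked about would look something like this (the Customer columns here are hypothetical):

IF NOT EXISTS (SELECT 1 FROM sys.tables
               WHERE name = 'Customer' AND schema_id = SCHEMA_ID('dbo'))
BEGIN
    -- hypothetical columns
    CREATE TABLE dbo.Customer
    (
        CustomerId INT NOT NULL PRIMARY KEY,
        Name       NVARCHAR(100) NOT NULL
    );
END
ELSE
BEGIN
    -- bring an existing table up to date, e.g. add a column if it is missing
    IF COL_LENGTH('dbo.Customer', 'Name') IS NULL
        ALTER TABLE dbo.Customer ADD Name NVARCHAR(100) NOT NULL DEFAULT '';
END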
VS can create a script for just the changes. I think this approach will be better than using existence checks because it will be able to handle column changes, and overall it makes for a shorter and more targeted script.
1. Right-click the project and select Publish.
2. Click Edit and enter the connection details for your staging database.
3. Back on the Publish dialog, click Advanced and make sure "Always re-create database" is not checked.
4. Back on the Publish dialog, click Generate Script.
What this approach does is compare the objects in the database project to your staging database and generates a SQL script for just what is different. You can even save the publish settings to a file to make it easier to generate future scripts.
Keith is right: you need to script the changes rather than just using the create statements.
You basically either need a copy of the production database to run a comparison against, or you give the DBAs a way to run the comparison and deploy.
The way I prefer to do it with TFS is to use SSDT in Visual Studio. I then have a custom build step as part of the .sqlproj file that builds the dacpac and uses sqlpackage.exe to compare the dacpac to the mirror of production (or dev, UAT, whatever) - this then outputs a script that will take that version of the database to the same version of the code as the dacpac.
You can adjust this slightly to auto-deploy to dev, UAT, etc. and just create the script for production, but the choice of exactly what you do is up to you!
If you can't get a mirror of production or a copy of the production schema, you can give the DBAs the dacpac along with either a batch file or PowerShell script to drive sqlpackage.exe to create a script, or just go ahead and deploy.
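As a rough sketch of what that DBA-facing batch file might look like (file, server, and database names are hypothetical; the sqlpackage.exe switches shown are standard ones):

@echo off
REM Generate an upgrade script from the dacpac against a target database.
REM Usage: generate-upgrade.bat <server> <database>
REM (MyDatabase.dacpac and the output file name are hypothetical)
sqlpackage.exe /Action:Script ^
  /SourceFile:"%~dp0MyDatabase.dacpac" ^
  /TargetServerName:%1 ^
  /TargetDatabaseName:%2 ^
  /OutputPath:"%~dp0MyDatabase-upgrade.sql" ^
  /p:BlockOnPossibleDataLoss=True

The DBA can review MyDatabase-upgrade.sql before running it, or you swap /Action:Script for /Action:Publish (dropping /OutputPath) to deploy directly.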
Exactly what works depends on the environment you are in!
Ed

How to deploy SQL script to clients

Our company is in the process of adopting TFS for source repository and project management. I am in charge of the database part of the project. We are using SQL Server 2008 R2, Visual Studio 2012 and TFS Online. We have a database that is used by several of our applications. So far I have been the only one handling any change to this database. As the company is expanding, we are going to have multiple dev teams. So I am planning to save the database as an SSDT project in TFS.
At the moment I am maintaining my database like the following:
I have separate folders for UDFs, Stored Procedures, and Config.
Under these folders I have subfolders for each object. For example, for stored procedures I have a subfolder for each stored procedure, containing the SQL script to create it. The config folder contains any script similar to SSDT's post-deployment script (for example, populating static data).
The SQL script contains code to drop the procedure and create it.
I have a C# app to concatenate all the SQL files into one single SQL file. Let's call it the FINAL script. When creating the FINAL script I can specify a version number, which adds an UPDATE statement to update the version table in the database.
The FINAL script is made available for customers to download and execute on the database. So the script mainly contains any add/edit to SPs, UDFs, and static data. It does not touch any existing data (data entered by users) in most cases.
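For context, each per-procedure script follows the usual drop-and-recreate pattern, something like this (the procedure and table names here are hypothetical):

IF OBJECT_ID('dbo.usp_GetCustomer', 'P') IS NOT NULL
    DROP PROCEDURE dbo.usp_GetCustomer;  -- hypothetical name
GO
CREATE PROCEDURE dbo.usp_GetCustomer
    @CustomerId INT
AS
BEGIN
    SELECT CustomerId, Name
    FROM dbo.Customer
    WHERE CustomerId = @CustomerId;
END;
GO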
As a newbie to TFS and SSDT I am not exactly sure how this can be done using SSDT/TFS, or if there is a better way of doing something similar. So far what I have understood about SSDT and TFS is:
I can import an existing database to SSDT project.
This will create scripts for all objects including tables.
I can easily do a publish of the database to a local server or to a server I have access to.
Things that seem confusing so far:
How do I supply clients with my latest update script? I am thinking of manually including the FINAL script in the SSDT project, but there must be a better way of doing it.
How do I publish the changes to a copy of the database without the loss of any user-entered data? My guess is that publishing creates the tables. I can take care of the static data, but I am not sure how to handle data entered by users.
Maybe there is something fundamentally wrong in my understanding of this whole thing. That is why I am here... :)
You want to pull your DB into a SQL Project. Maintain all of your changes there. This tells your system what the schema of your database should be. From there, I'd generate the dacpac files (by building the project) and provide those to your clients, having them install the SSDT tools, which include SQLPackage. They can run SQLPackage to make changes to their database to handle the schema changes automatically. This will bring their database in line with your schema, no matter how far off it might be.
I'd also create a publish profile for them to use. This lets you control some of the settings.
• You can choose to not drop any objects not in your project
• You can choose to ignore users/permissions
• You can set an option to not allow changes if there would be data loss
• You can wrap everything in a transaction so a failed update rolls back
If you give them a batch file to run, you can specify an output file or a Diff report, or have them generate their own script to do the update.
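A minimal sketch of what such a publish profile might contain, mapping to the settings above (the element names are standard SqlPackage deployment options; the file is just an MSBuild property file, and MyDb is a hypothetical name, so exact contents will vary):

<?xml version="1.0" encoding="utf-8"?>
<!-- MyDb.publish.xml (hypothetical file name) -->
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <TargetDatabaseName>MyDb</TargetDatabaseName>
    <DropObjectsNotInSource>False</DropObjectsNotInSource>
    <IgnorePermissions>True</IgnorePermissions>
    <IgnoreRoleMembership>True</IgnoreRoleMembership>
    <BlockOnPossibleDataLoss>True</BlockOnPossibleDataLoss>
    <IncludeTransactionalScripts>True</IncludeTransactionalScripts>
  </PropertyGroup>
</Project>

The client's batch file can then run something like sqlpackage.exe /Action:Publish /SourceFile:MyDb.dacpac /Profile:MyDb.publish.xml /TargetServerName:TheirServer (names hypothetical).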
I blogged about this at http://schottsql.blogspot.com/2013/10/all-ssdt-articles.html
(or http://schottsql.blogspot.com/search/label/SSDT if that doesn't work well). That will take you through some basics of why you might want to use SQL Projects, creating them, maintaining them, and publishing the changes to an existing database.

How to automate deployment of entity model changes to database?

Currently I use a Visual Studio Database Project, so I can deploy changes to the database with one click and keep the data in the database.
Now I want to be able to create a model in Entity Framework and deploy it with one click.
So I got a SQL script from Entity Framework to create the database. I can run this script to create the database, but I want to keep my data in the database.
Is there any way to do that? Any tool that will do that? Should I generate it on my own with T4?
I use CI, so I need to be able to deploy often. I want something similar to Visual Studio Database Project deployment, but with the Entity Framework generated database.
Liquibase is a database change management tool. It's implemented in Java, but a command-line version is available to control your database upgrades (a .NET version is under development).
If you need some modelling tool support, Power Architect can be used with Liquibase.
The problems associated with managing database schema upgrades are subtle. For some background reading I would recommend:
Evolutionary Database Design
Get your database under version control
Update
Create a file called liquibase.properties to hold the database details:
url=jdbc:sqlserver://localhost:1433;databaseName=test
username=myuser
password=mypass
driver=com.microsoft.sqlserver.jdbc.SQLServerDriver
classpath=C:\\Program Files\\Microsoft SQL Server 2005 JDBC Driver\\sqljdbc_1.2\\enu\\sqljdbc.jar
changeLogFile=database-changelog.xml
When using liquibase against an existing database you can run the following commands:
liquibase generateChangeLog
liquibase changelogSync
The first command will create an XML file called database-changelog.xml containing the extracted data model.
The second command is optional, but useful if you want to apply new changes to the current database. It marks the extracted changesets as already executed in the database.
Now that you have a starting point, you can proceed to add new changesets to the database-changelog.xml file. To apply these new changes just run the following command:
liquibase update
This is the same command that you use for brand new databases. During an update operation liquibase will compare the changesets in the XML file to the changesets already applied to the target database.
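For example, a new changeset appended to database-changelog.xml might look like this (the id, author, and table are hypothetical; addColumn is a standard Liquibase refactoring):

<changeSet id="2" author="alice">
    <!-- hypothetical example: add an email column to an existing customer table -->
    <addColumn tableName="customer">
        <column name="email" type="nvarchar(255)"/>
    </addColumn>
</changeSet>

Running liquibase update again applies only this new changeset, since the earlier ones are already recorded as executed.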
For more advanced use cases I suggest reading the Liquibase documentation, and the following answer may also help:
comparing databases and generating sql script using liquibase
To generate a Visual Studio database project from an Entity Framework model, you need to install the Entity Designer Database Generation Power Pack.
You need to add a database project to your solution and then create an .edmx model with the same name. Then right-click on the .edmx design surface, select Generate Database from Model, and choose Sync Database Project from the generation menu.
You can then deploy this SQL project from Visual Studio to SQL Server.

What is the best way to manage "non-SQL Server" SQL objects within Visual Studio 2010?

Visual Studio has a Database Project for SQL Server. This has a number of advantages: it hosts configuration settings and database objects in one place. The .sql files are part of the regular .NET solutions - visible in the Solution Explorer and editable in Visual Studio. And they have a mechanism for generating a deployment script. With each individual database object in its own file, the tracking of changes and source control is greatly simplified.
Has anyone had any success with using Database Projects with "non-SQL Server" databases? We use Sybase - which uses T-SQL and is very similar to SQL Server so I'm hopeful.
Or is there an alternative approach? I guess I could use a standard project (.csproj) and call a custom command-line application as part of the post-build step to convert the .sql files into a deployment script.
Any ideas would be welcome.
Thanks
OK, I'll answer my own question.
I added all of our SQL objects to their own .sql files within a Visual Studio .dbproj project. However, minor syntactic incompatibilities between the Sybase version of RAISERROR and the Microsoft version of RAISERROR caused the validation code built into Visual Studio to get unhappy. The problem with the database project was that this actually caused a compilation error - which basically made it into a show-stopper.
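To give a flavour of the incompatibility (an approximate sketch from memory, not exact grammar for either dialect):

-- Sybase ASE style: error number first, no parentheses
raiserror 20001 "Customer not found"

-- Microsoft SQL Server style: message, severity, state
RAISERROR('Customer not found', 16, 1);

Visual Studio's validator only understands the Microsoft form, so the Sybase form registers as a syntax error.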
So I scrapped that idea and added the .sql files to a standard .csproj project file. I then implemented some custom code that would load all of the .sql files, and aggregate them into a deployment script when invoked. I added a call to the custom code to the post build of the .csproj file so that whenever it was compiled - it would output a deployment script - which works like a dream with our build server.
In order to get some of the benefits of the .dbproj, I looked into writing a full SQL parser, but was quickly discouraged by some of the posts on SO. Instead I did some rudimentary parsing with regex - which got me a few cool features without a lot of effort:
The code could detect dependencies between the various .sql files, and add them to the deployment script in the correct order to avoid sysdepends warnings.
Where there were no dependencies, objects were ordered based on the object type (stored procedure, function, grant statement, etc) and then by name so that the resulting script was always ordered the same - which is very important if you need to diff two versions of the script.
The deployment script can figure out some of the required permissions, so I don't need to keep track of all of the GRANT statements.
Stored procedures that are in the database but not in the script can be dropped automatically - so I don't need to keep track of what state each database is in - we just run the script and everything is in the correct state.
We have a few stored procedures that our automated tests call that shouldn't be deployed. The code can detect these and include them in a Debug build and exclude them in a Release build.
The custom code also generates a diff script that determines what changes the deployment script will make to a database and prints them out. This allows the person who is running the script to get an idea of what it will do. For example, the diff script might tell them that no changes will be made - so they don't need to run the deployment script at all - which is kind of handy if it saves them logging in at 3am to take a database offline and take backups etc.
So the end result is that all of my SQL objects are in separate files making them easy to work with in Visual Studio and manage under source control. For the first time since I started this job, I can look at the history in source control and tell what files have been changed (before this we had one enormous .sql file with absolutely everything in it).