I have a database that I've set up using composite SQL data projects in SQL Server 2012. The main database has a reference to a library database, and that reference is set to be included in the same database. I can deploy it fine. However, when I try to do a compare, it ignores the library database. Is there some setting I need to use to get it to compare against the full, composed database?
See:
http://blogs.msdn.com/b/ssdt/archive/2012/06/26/composite-projects-and-schema-compare.aspx
To deploy a composite project you must set the Include composite objects option on the project you're deploying from. ... This is available as an Advanced option in Publish and on the Debug properties tab, and as a general option in Schema Compare
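For command-line deployments, a hedged equivalent is the SqlPackage publish property IncludeCompositeObjects; the dacpac, server, and database names below are placeholders:

SqlPackage.exe /Action:Publish /SourceFile:"MainDb.dacpac" /TargetServerName:"MyServer" /TargetDatabaseName:"MainDb" /p:IncludeCompositeObjects=True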
I am trying to implement CI/CD for ReadyRoll. For the release portion I am using an Azure SQL server, so I have specified the server name, database name and credentials there. However, I am not sure what details to give the build component when it creates the shadow database. I thought they were the same, but then I get an error saying that it's trying to create a database on my Azure SQL server, and it fails because there's already a database with that name. This led me to think I am supplying the wrong values, but I am not sure what I am supposed to supply.
ReadyRoll maintains two databases:
•Target database
This is the development database or sandbox that you use for
debugging and to edit schema objects (e.g. using SSMS). When you
deploy, ReadyRoll executes your migration scripts against this
database to upgrade it. You shouldn't drop the target database from
your SQL Server instance.
•Shadow database
This is an exact copy of your database schema created automatically
from your project scripts (001.sql, 002.sql, 003.sql, etc). It's
created every time you use the ReadyRoll DbSync tool to view pending
changes or import. The shadow database is used by the SQL Compare
engine (that powers ReadyRoll) as the base from which to generate a
new migration script. It is safe to drop the database at any time.
More information: Target and shadow databases
You can specify these arguments for the shadow database: ShadowServer, ShadowUserName, ShadowPassword, ShadowDatabase. (You can also specify just the target database.)
More information: Shadow database
A sample of the MSBuild Arguments for the Visual Studio Build task:
/p:TargetServer=XXX.database.windows.net /p:TargetUsername=XXX /p:TargetPassword=XXX /p:ShadowServer=XXX /p:TargetDatabase=XXX /p:GenerateSqlPackage=True /p:SkipDriftAnalysis=True /p:ShadowUserName=XXX /p:ShadowPassword=XXX /p:DBDeployOnBuild=True
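If the shadow database is being created on the same Azure SQL server as the target (which is what causes the name collision described in the question), one hedged option is to give the shadow its own name, or point ShadowServer at a separate, disposable instance; the values below are placeholders:

/p:ShadowServer=XXX.database.windows.net /p:ShadowUserName=XXX /p:ShadowPassword=XXX /p:ShadowDatabase=MyDb_SHADOW

Since the shadow database is rebuilt from the project scripts and is safe to drop at any time, it does not need to live on the release server at all.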
I've created an SQL Server Database Project so that I can capture my database schema and add it to source control.
My problem is that the database contains views which reference external databases. Given the business and project environment, this is an acceptable solution in the short to medium term.
Sadly, this stops the database project from compiling, since it doesn't contain the external database tables.
What are my options for getting around this error? I'm currently storing the schema in a single generated script, which is a pain to update.
Look at creating dacpac files out of the external databases and adding them as database references. I did that by using the SQLPackage command line to generate the file, putting the files in a "shared" folder (optional, but useful if this pattern persists with other projects), then adding a database reference to the project. I recommend removing the variable for the DB name unless it can change in different environments. I blogged a bit about this here:
http://schottsql.blogspot.com/2012/10/ssdt-external-database-references.html
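As a rough sketch of the dacpac-extraction step described above (server, database, and path names are placeholders):

SqlPackage.exe /Action:Extract /SourceServerName:"ExternalServer" /SourceDatabaseName:"ExternalDb" /TargetFile:"C:\Shared\ExternalDb.dacpac"

The resulting .dacpac file is then added to the main project via Add Database Reference.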
If it's a truly breaking change, I've handled it through post-deploy scripts: drop and recreate the view, then reapply any permissions necessary. That's not ideal, but it can work.
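A minimal sketch of such a post-deployment script, assuming a hypothetical view dbo.vExternalOrders over an external database (all names are placeholders):

IF OBJECT_ID(N'dbo.vExternalOrders', N'V') IS NOT NULL
    DROP VIEW dbo.vExternalOrders;
GO

-- Recreate the view against the external database.
CREATE VIEW dbo.vExternalOrders
AS
SELECT o.OrderId, o.CustomerId, o.OrderDate
FROM [ExternalDb].[dbo].[Orders] AS o;
GO

-- Reapply any permissions the redeploy may have dropped.
GRANT SELECT ON dbo.vExternalOrders TO [ReportingRole];
GO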
In my database project, I have added a reference to a linked server. When I use this linked server in a view and try to build my database project, SSDT reports errors because it cannot understand references to any of the schemas referenced on the linked server:
[LinkedServer].[DB1].[dbo].[Table1]
The above returns an error that SSDT cannot decipher the reference to [DB1].[dbo].[Table1]. I tried to add a reference to this database, but SSDT required either a .dacpac file (produced by another database project) or a system database on the same server as the database in my project.
How do I handle referencing an external database? There are use cases where a project needs to reference a remote database that is not an SSDT database project. In my case, I am accessing the database of another company, and putting that database under version control as an SSDT project is out of the question.
Create a new SQL project for the remote database, place any objects in the project that you need to reference (doesn't have to be the whole database), and then add that project as a Database Reference to your project. You don't have to deploy the remote database, just have the definition of objects you use so they can be referenced.
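For illustration, a hedged sketch with hypothetical names: the reference-only project might contain nothing but a stub definition of the table the view uses.

-- RemoteDb project, Tables\Table1.sql: only the columns you actually reference are needed.
CREATE TABLE [dbo].[Table1]
(
    [Id]   INT           NOT NULL,
    [Name] NVARCHAR(100) NULL
);

When adding the project (or its dacpac) as a database reference, choosing the "different database, different server" location lets SSDT generate SQLCMD variables for the linked server and database names, so the view can be written as [$(LinkedServer)].[$(DB1)].[dbo].[Table1]; treat those variable names as placeholders.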
The option we finally settled on was to use SSIS to import the data. That way, the transfer of remote data happens in an ETL layer. Our database did not reference any remote databases at all, which can also improve performance (it eliminates transfers over the network, cross-server joins, etc.).
I would recommend using SSIS or a similar method to ETL your data into local tables that your database project can reference (without needing an external project reference).
I am adding continuous integration testing to an existing Visual Studio 2010 database project. Right now we have a build that deploys an 'empty' database [dbo].[MyDb] with just the reference data needed, such as locales and countries. This is performed using SQL files containing insert statements that are run in the post-deployment SQL build task.
I now want to add another test deployment build that will deploy to another database on the same staging server as [dbo].[MyDb].[Test] with the same reference data but with generated test data that will have foreign keys to the reference data. Database integration tests are then run against that. Because the state needs to be restored for each test, this needs to be as fast as possible.
From what I've tried so far, it seems that to generate the test data using Visual Studio's data generation plan, I need to get the reference data into a form that the Databound generator can read, so that it can generate test data in a way that maintains referential integrity.
The possible options I can think of are:
Somehow get the data generation plan to read the reference sql files?
Change the reference sql files to csv files and change the original build to do bulk inserts (e.g. one BULK INSERT per file, as sketched after this list)
Combine the builds so that the MyDb database is always deployed first and set it as the sequential databound generator source for the test db.
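For the second option, a hedged sketch of what one of those bulk inserts might look like in a post-deployment script (the table name, file path, and format details are assumptions):

-- Load the Countries reference data from a CSV file.
BULK INSERT [dbo].[Countries]
FROM 'C:\ReferenceData\Countries.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    FIRSTROW        = 2   -- skip the header row
);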
Has anyone got a better approach or can point to a good guide?
I'm not an expert on build scripts so would like to take advantage of tools to do as much as possible. I want to keep things as a Visual Studio Database project but I also have a license for RedGate's SQL Tools if that would make the testing easier.
It appears that handling of reference data still isn't supported very well by database projects. This is confirmed by the comments on this post by Barclay Hill.
At the moment I've gone with the option of having a reference database and using that with a sequential databound generator. Since it doesn't change very often I just deploy it manually and have stopped short of having a whole separate project just for that as I've seen elsewhere.
Hopefully reference data handling will be added to SQL Server Data Tools at some point.
I'm new to using SQL Server 2008 DB Projects in VS 2010. I found a good intro to setting them up. It's nice how they create Tables, Stored Procs, etc. as objects. But is it also a limitation?
I want to use this project to manage 1 stored procedure (for learning). I do not want to import the entire database because 90% of the database is stuff we do not manage.
I created a new project without doing the import process. I then added a new stored procedure. Now I am having difficulty getting the thing to build. I'm getting various errors saying that I have unresolved references to objects.
How can I add a new stored procedure, build it, and deploy it to the database? Is it possible with this kind of SQL project, or do I need to drop back to the old, simple type of SQL projects that VS 2008 and below used?
Update
According to another post, support for the Database Project type is gone. Support for my situation appears to have been erased.
Update 2 (3/21/2012)
I installed MSSCCI, which allows me to use SSMS directly with TFS 2010. I no longer needed the database project, and I found its setup process unmanageable for a large SQL 2008 database, especially when you only manage a small percentage of the DB.
You can Partition a Database Project by Using Partial Projects. This allows the database project to know the entire schema of the database while, at the same time, you need not maintain the entire schema yourself. You can work with the subset of the database that's under active development, for instance (or the subset that is your responsibility), yet the project knows the entire schema. This permits it to create change scripts at deployment time by comparing the schema in the project with the schema in the target database.
You must import all the schema objects referenced by your new stored procedure. But this can become a large task, because every referenced object needs all of its references too.
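For illustration, a hedged sketch with hypothetical names: if the new procedure selects from dbo.Customer, the project also needs a definition of dbo.Customer (imported or written as a stub) before the reference will resolve at build time.

-- StoredProcedures\usp_GetCustomerName.sql
CREATE PROCEDURE [dbo].[usp_GetCustomerName]
    @CustomerId INT
AS
BEGIN
    SELECT [Name]
    FROM [dbo].[Customer]
    WHERE [CustomerId] = @CustomerId;
END

-- Tables\Customer.sql: minimal stub so the procedure's references resolve.
CREATE TABLE [dbo].[Customer]
(
    [CustomerId] INT           NOT NULL,
    [Name]       NVARCHAR(100) NOT NULL
);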
Linked server objects are even more trouble.