Visual Studio 2010 Database Edition schema compare where target is dbproj

I'm using Visual Studio 2010 Database Edition and running a schema compare against a SQL database instance to sync up new objects on the database which aren't yet in my project (dbproj). My solution contains many projects, some of which reference each other via database references. When I write updates from my schema compare to my target project, any references to objects in my other projects get added as hard references and do not use the SQLCMD variables that are set up when adding a database reference.
This causes my solution to have warnings and errors until I manually swap out the hard references for the SQLCMD variables. I've found I can re-create the database references, which has an option to pick up all the hard references throughout the project, but this is still cumbersome. Is there a way to use the existing set of SQLCMD variables defined for my project so that updates are written to my project with the variable references used?
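To illustrate, with made-up object names, here is the hard reference that schema compare writes versus the SQLCMD-variable form I want (this is just a sketch, not output from the tool):

-- Hard reference written into the project:
SELECT c.CustomerId FROM [OtherDatabase].dbo.Customer AS c

-- Reference using the SQLCMD variable defined by the database reference:
SELECT c.CustomerId FROM [$(OtherDatabase)].dbo.Customer AS c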

This is just a thought, but we never make direct changes in the database; we use our project to build against the db to produce a deploy script. By treating the project as the master of all knowledge, we find this works very well for us.
We branch out to create our dev and test environments, then merge the project when we deploy.
As for references, we found that referencing the dbmeta file instead of the actual database was also useful, as it makes all the projects more portable.

My solution isn't exactly what you want to do, but it might help. When I run into similar issues, I usually create a temporary database for the schema compare and use it instead of my project.

Related

Reference to system databases becomes duplicated in SSDT database project

In a huge SSDT database solution with lots of projects and references, I'm adding a reference from my project to the system databases (master, msdb). It works well and the build is successful.
After some time, though, I start receiving errors about an incorrect reference. I go to the references section and see this: https://pasteboard.co/JqzDSDh.png
I tried removing the second reference and the errors were gone, but then the issue comes back and I see two identical references again.
Thank you!
Most probably something is wrong with your project's .sqlproj file. Search for the master.dacpac keyword there and make sure there are no duplicate entries. Also make sure that the dacpac path is not fully hardcoded, but uses the $(DacPacRootPath) variable.
This is an example of how the reference should look (make sure you have the right SQL version in the path; mine is 140 here).
<ArtifactReference Include="$(DacPacRootPath)\Extensions\Microsoft\SQLDB\Extensions\SqlServer\140\SqlSchemas\master.dacpac">
  <HintPath>$(DacPacRootPath)\Extensions\Microsoft\SQLDB\Extensions\SqlServer\140\SqlSchemas\master.dacpac</HintPath>
  <SuppressMissingDependenciesErrors>False</SuppressMissingDependenciesErrors>
  <DatabaseVariableLiteralValue>master</DatabaseVariableLiteralValue>
</ArtifactReference>
If that doesn't help, try running "Clean Solution", then delete all *.jfm and *.dbmdl files along with the bin and obj folders, and rebuild the project.
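If you want to script that clean-up, a minimal sketch in a .bat file run from the solution root might be (it only assumes the file patterns above; double-check before running, since it deletes folders):

rem Remove SSDT cache files and build output
del /s /q *.jfm *.dbmdl
for /f "delims=" %%d in ('dir /s /b /ad bin obj') do rd /s /q "%%d"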

Build Multiple Projects in a Single Solution for VSTS

I have a single solution built in Visual Studio 2013. It contains more than 10 projects that reference each other. I only need to build and release 3 of them on Azure via Visual Studio Team Services, so the question is: what is the best approach to do this?
Thank you
If they all reference each other then you may need to build all of them. Dependencies will be resolved at build time.
You can reference individual project files in place of a solution. You will then need to maintain the build order yourself.
Just use the same build step as for building a solution, but specify a project file instead. Control the order by having multiple build steps.
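For example, with hypothetical project names, the MSBuild invocations behind those build steps would be something like this, one step per project in dependency order:

rem One build step per project, in dependency order
msbuild src\Common\Common.csproj /p:Configuration=Release
msbuild src\ServiceA\ServiceA.csproj /p:Configuration=Release
msbuild src\WebApp\WebApp.csproj /p:Configuration=Release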
Please read my suggestions below (a sketch of the post-build event follows the list):
- Copy all the referenced DLLs into one shared folder through a post-build event.
- Create a new solution according to your deployment needs and take all references from the shared folder in all projects.
- Deploy the solution you want.
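A minimal sketch of such a post-build event, assuming a SharedLibs folder at the solution root (the folder name is made up), could be:

xcopy "$(TargetDir)$(TargetName).dll" "$(SolutionDir)SharedLibs\" /Y /I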
If the references are project references, you just need to specify the solution file (.sln) to build; otherwise, you need to specify the project dependencies (right-click your solution => Project Dependencies => select a project => check the projects it depends on).

Source control in SSIS and Concurrent work on dtsx file

I am working on building a new SSIS project from scratch and want to work on it with a couple of my teammates. I was hoping to get a suggestion on how we can set up source control so that a few of us can work concurrently on the same SSIS project (the same .dtsx files, building new packages).
Version:
SQL Server Integration Services v11
Microsoft Visual Studio 2010
It is my experience that there are two opportunities for any source control system and SSIS projects to get out of whack: adding new items to the project and concurrent changes to an existing package.
Adding new items
An SSIS project has the .dtproj extension. Inside, it's "just" XML defining what belongs to the project, at least for 2005/2008 and for 2012+ under the package deployment model. The 2012+ project deployment model carries a good bit more information about the state of the packages in the project.
When you add new packages (or project-level connection managers, or .biml files), the internal structure of the .dtproj file is going to change. Diff tools generally don't handle merging XML well, or at all, really. So, to avoid having to merge the project definition, you need to find a strategy that works for your team.
I've seen two approaches work well. The first is to define up front all the packages you think you'll need: DimFoo, DimDate, DimBar, FactBlee. Check that project and the associated empty packages in, and everyone works on what is out there. When the initial cut of packages is complete, you make sure everyone is synced up and then add more empty packages to the project. The idea here is that there is one person, usually the lead, who is responsible for changing the "master" project definition, and everyone consumes their change.
The other approach requires communication between team members. If you discover a package needs to be added, tell your teammates: "I need to add a new package - has anyone modified the project?" The answer should be no. Once you've announced that a change to the project definition is coming, make it and immediately commit it. The idea here is that people commit and sync (or check in, whatever the terminology) with great frequency. If you as a developer don't keep your local repository up to date, you're going to be in for a bad time.
Concurrent edits
Don't. Really, that's about it. The general problem with concurrent changes to an SSIS package is that, in addition to the XML diff issue above, SSIS also stores layout data alongside the tasks: I can invert the layout and make things flow from bottom to top or right to left with no material change to the SSIS package, but as Siyual notes, "Merging changes in SSIS is nightmare fuel".
If you find your packages are so large that developers need to make concurrent edits, I would suggest you are doing too much in them. Decompose your packages into smaller, more tightly focused units of work and then control their execution through a parent package. That gives you a better level of granularity in development and debugging, in addition to avoiding the concurrent edit issue.
A dtsx file is basically just an XML file. Compare it to a bunch of people trying to write the same book. The solution I suggest is to use Team Foundation Server for source control. That way everyone can check packages in and out and merge them. If you really don't have that option, try to split your ETL process into logical parts and at the end create a master package that calls each sub-package in the right order.
An example: let's say you need to import stock data from one source, branches and other company information from an internal server, and sale amounts from different external sources. After you have gathered all the information, you want to connect it and run some analyses.
You first design the target database entities you need and their relations. One team member creates a package that does all the imports into staging tables. Another perhaps handles the external sources and parallelizes/optimizes the loading. You would build a package that merges your staging and production tables, maybe with historization and so on.
At the end you have a master package that calls each of the mentioned packages, plus maybe some additional logging and so on.
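The parent-package approach is equivalent in spirit to running the children in order from the command line with dtexec; a sketch with hypothetical package paths (file-system deployment assumed):

rem Run the child packages in dependency order
dtexec /F "C:\ETL\LoadStaging.dtsx"
dtexec /F "C:\ETL\LoadExternalSources.dtsx"
dtexec /F "C:\ETL\MergeToProduction.dtsx"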
In our multi-developer operation, we follow this rough plan:
- Each dev has their own branch, separate from the master branch
- Once a week, devs push all their changes to the remote
- One of us pulls all changes and merges all branches into master, manually resolving .dtproj conflicts as we go
- Merge master into all dev branches - now all branches agree
- Test in VS
- Push all branches to the remote; other devs can now pull and keep working
It's not a perfect solution, but it helps quarantine the amount of merge pain we have to experience.
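Assuming git and hypothetical branch names, the weekly merge step looks roughly like this (the .dtproj conflicts are resolved by hand during the merge commands):

git checkout master
git merge dev-alice
git merge dev-bob
git checkout dev-alice
git merge master
git checkout dev-bob
git merge master
git push --all origin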
We have large SSIS solutions with 20+ packages in one solution, using TFS Git. One project required adding a bunch of new packages to the existing solution. We thought we were smart and knew to assign only one person to work on each new package; two people working on the same package would be suicide. That wasn't good enough. When two people tried to add a different, new package at the same time, each showed the .dtproj as a file that had changed and needed to be checked in, and suddenly I found myself looking at the XML of the .dtproj and trying to figure out which lines to keep (Microsoft should never ask end users to manually edit internal files which only they wrote and understand). Billinkc's solutions here are very good and the problem is very real. You may think that Microsoft is the great Wise One and that your team can always add new packages to an existing solution without conflicts, but you'd be wrong. It also doesn't work to put the .dtproj in .gitignore: if you do that, you won't see other people's new packages (the .dtsx file will actually come down in git, but you won't see the package in Solution Explorer, because the .dtproj is what feeds Solution Explorer). This is a current problem (2021) and we are using Visual Studio 2017 Enterprise with SSDT.
To explain the problem: git can obviously handle a group of independent, individual files in a directory (say, .bat files) and can add, change, and delete them easily. The trouble starts when you have a file that names, describes, and counts all the files in a directory, which is what the .dtproj does. With a file like that, two people adding a new package at the same time create a conflict on the .dtproj itself: your .dtproj has a line showing the package you added, mine shows the package I added, and TFS/git sees that as a conflict.
Some are suggesting ways to deal with this if you have to add a lot of new packages; my idea is a little different. The people who have to add new packages shouldn't work in the primary solution where this problem lives; they should work somewhere else, probably in the "Projects" directory you get when you install Visual Studio, outside of TFS/Git. Obviously follow all the standards, variable naming, and package configuration conventions of the target solution. Then, when the new packages are ready, hand the .dtsx files to your solution gatekeeper to check in. Only the gatekeeper checks in new packages, using Add From Existing, which avoids the conflicts. Once a package is checked in, developers can work on it in the main solution.

SSDT - Build Deployment Script without dacpac

I've got a question about building a deployment script using SSDT.
Could anyone tell me if it's possible to build a deployment script using SQLPackage.exe where the source is NOT a dacpac file, but the .sql files instead?
To give some background, I've created a project in Visual Studio 2012 for my database schema. This works great, and SSDT builds the folder structure without a problem (functions, stored procedures, etc., which contain all the .sql files).
Here's the problem: the database in question is from a legacy system and is riddled with errors. Most of these errors we don't care about anymore, and it's not practical or safe to fix them all, so for years we've basically ignored them. However, it means we can't build the project and therefore can't generate the dacpac file. This doesn't prevent us from doing the schema compare and syncing the database with the file system (a local Mercurial repository), but it does seemingly prevent us from building a deployment script.
What I'm looking for is a way of building the deployment script using SQLPackage.exe without having to generate the dacpac file, using the .sql files in the file system instead. Visual Studio will produce a script of the differences without building the dacpac, so this makes me think it must be possible to do with SQLPackage.exe using one of its parameters.
Here's an example of the SQLPackage.exe call which I'd like to adapt to use the .sql files instead of the dacpac:
sqlpackage.exe /Action:Script ^
  /SourceFile:"E:\SourceControl\Project\Database\test_SSDTProject\bin\Debug\test_SSDTProject.dacpac" ^
  /TargetConnectionString:"Data Source=local;Initial Catalog=TestDB;User ID=abc;Password=abc" ^
  /OutputPath:"C:\temp\hbupdate.sql" ^
  /OverwriteFiles:true ^
  /p:IgnoreExtendedProperties=True ^
  /p:IgnorePermissions=True ^
  /p:IgnoreRoleMembership=True ^
  /p:DropObjectsNotInSource=True
This works fine because it uses the dacpac file. However I need to point it at the folder structure where the .sql files are instead.
Any help would be great.
As has been suggested in comments, I think that biting the bullet and fixing the errors is the way ahead. You say
it's not practical or safe to fix them all,
but I think you should give this a bit more thought. I have recently been in a similar situation to yours, and the key to emerging from it is to realise that the operational risk associated with dropping procedures and functions that will throw an exception as soon as they are called is zero.
Note that this does not apply if the reason these objects won't build is that they contain cross-database or cross-server references that are present in production but not in your project; that is a separate problem altogether, but also a solvable one.
Nor am I in favour of "exclude from build" as an alternative to "delete"; a while ago I saw a project where this technique had been used extensively, and it makes it harder to see what does what from the source files. I am now of the opinion that "Build Action=None" is simply "commenting out the bits that don't work" for the Snapchat generation.
The key to all of this, of course, is source control. It addresses the residual risk that one day you might indeed want to implement a working version of one of your currently non-working procedures, using the non-working code as a starting point. It also obviates the need to keep stuff hanging around in the solution with Build Action=None, as you can simply summon an earlier revision of the code that contained the offending objects.
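Since the question mentions a local Mercurial repository, pulling a dropped procedure back out later is a one-liner; the file path and revision number below are hypothetical:

hg log "StoredProcedures\usp_LegacyThing.sql"
hg cat -r 42 "StoredProcedures\usp_LegacyThing.sql" > usp_LegacyThing.sql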
If my experience is any guide, 60 build errors is nothing; these could easily be caused by references to three or four objects that no longer exist, and they can be consigned to the dustbin of source control with some enthusiastic use of the Delete key.
Do you have a copy of SQL Compare at your disposal? If not, it might be worth downloading the trial to see if it will work in your scenario.
Here are the available switches:
http://documentation.red-gate.com/display/SC10/Switches+used+in+the+command+line
At the very least you'll need to specify the following:
/scripts1:
/server2:
/database2:
/ScriptFile:
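Putting those together, an invocation might look roughly like this (the scripts folder, server, database, and output path are placeholders):

sqlcompare /scripts1:"E:\SourceControl\Project\DatabaseScripts" /server2:local /database2:TestDB /ScriptFile:"C:\temp\hbupdate.sql"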

Visual studio 2010 database project and code generation

I'm trying to use the database project in VS2010, but my setup is a bit different from the standard one and I can't find an easy way to get it to work.
I have a "model" project which contains XML model definitions describing simple pieces of information for an ETL process. As well as the schema for the supplied information, it contains other metadata, for example details of which columns need to be matched up with other tables, what to do in case of a non-match, and so on.
Using T4 templates, I then generate SQL scripts, views and tables to manage the whole thing, one .sql file per XML file. There are around 30 XML definitions, but the number of parameters is small and the pattern very repetitive, so it works well.
I want to drop these SQL files into the database project in order to get it to generate the deploy scripts and identify database changes for me. I can arrange for the files to be combined into one script. Is there a way to get VS to analyse the scripts automatically, or do I need to import them every time?
EDIT: I originally asked about getting VS not to split my scripts up into individual components. I found a solution to this: copy the existing script into the project and, crucially, change the "Build Action" for the script to "Build" (for some reason, the default is "Not in Build"). VS will then add the item into the model and it will be part of the generated scripts - yay! However, there is still no way to reference scripts in other projects...
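To show what that looks like in the project file, this is roughly the item the change produces in the .dbproj (the file name is made up, and the exact shape may differ between project versions):

<ItemGroup>
  <Build Include="Scripts\GeneratedEtlViews.sql">
    <SubType>Code</SubType>
  </Build>
</ItemGroup>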
I've read the MS how-to for database projects, but didn't find anything in it that seemed relevant.
Thanks for your help,
You can do this with T4 Toolbox. Here is how: http://www.olegsych.com/2010/03/t4-tutorial-integrating-generated-files-in-visual-studio-projects/. Specifically, you want to take advantage of the Template.Output.File and Template.Output.Project properties.
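A rough sketch of how those properties are used inside the driving .tt file; the template class name, input file, and project path are hypothetical and not taken from the linked tutorial:

<#@ template language="C#" hostspecific="true" #>
<#@ include file="T4Toolbox.tt" #>
<#
    // One generated .sql file per XML definition, routed into the database project
    var template = new EtlScriptTemplate("DimFoo.xml");
    template.Output.File = @"Scripts\DimFoo.sql";
    template.Output.Project = @"..\Database\Database.dbproj";
    template.Render();
#>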
Oleg