Reference to system databases becomes duplicated in SSDT database project

In a huge SSDT database solution with lots of projects and references, I add a reference from my project to the system databases (master, msdb). It works well and the build is successful.
After some time, though, I start receiving errors about an incorrect reference. I go to the references section and see this: https://pasteboard.co/JqzDSDh.png
I tried removing the second reference and the errors were gone, but then the issue comes back and I see two identical references again.
Thank you!

Most probably something is wrong with your .sqlproj file. Search for the master.dacpac keyword there and make sure there are no duplicate entries. Also make sure the dacpac path is not fully hardcoded but uses the $(DacPacRootPath) variable.
This is an example of how the reference should look (make sure you have the right SQL Server version in the path; mine is 140 here):
<ArtifactReference Include="$(DacPacRootPath)\Extensions\Microsoft\SQLDB\Extensions\SqlServer\140\SqlSchemas\master.dacpac">
  <HintPath>$(DacPacRootPath)\Extensions\Microsoft\SQLDB\Extensions\SqlServer\140\SqlSchemas\master.dacpac</HintPath>
  <SuppressMissingDependenciesErrors>False</SuppressMissingDependenciesErrors>
  <DatabaseVariableLiteralValue>master</DatabaseVariableLiteralValue>
</ArtifactReference>
If that doesn't help, run "Clean Solution", then delete all *.jfm and *.dbmdl files as well as the bin and obj folders, and rebuild the project.
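To script that clean-up, something along these lines should work (a sketch for an interactive command prompt; the solution path is a placeholder, and in a .bat file %d would need to be doubled to %%d):

cd /d C:\path\to\solution
rem delete SSDT intermediate files, then all bin and obj folders recursively
del /s /q *.jfm *.dbmdl
for /d /r . %d in (bin,obj) do @if exist "%d" rd /s /q "%d"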

What's the purpose of ProjectDependencies in a VS solution file

I read this post on the contents of a solution file, but still have no clue about the actual purpose of dependencies declared within a solution file rather than within the project file itself.
It seems there are two ways of having project 2 depend on project 1:
Add a project reference from P2 to P1. This will alter the csproj file for P2 by introducing a ProjectReference to p1.csproj, but won't change the solution, as far as I understand.
Add an assembly reference from P2 to P1. This will also alter the csproj file by using a Reference to a compiled assembly (DLL). However, it also adds a ProjectDependency into the solution file, which I do not understand. Why is this second entry within the solution needed in this case? Isn't the assembly reference provided within the csproj file for P2 sufficient?
It's purely historical. The new project files don't really need it anymore, but the .sln file format predates msbuild and thus the solution file has some duplication.
It's used to define the build order, which becomes more important when you have ancient project types in your solution, as these won't be able to declare build order. It's also used to declare and validate build order between unrelated projects (e.g., projects that don't reference each other), without the IDE having to load and parse all the projects.
Your second case is one of those cases where the solution file keeps track of build order. It then knows it needs to build P1 prior to P2. Without the solution-level reference that information would be lost. It's quite clever that this is automatically detected and added; in the past you needed to define such build-order dependencies manually.
At compile time the .sln is transformed into an MSBuild file which then orchestrates the build. You can set an environment variable to generate yours.
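For reference, the environment variable is MSBuildEmitSolution (the solution name below is a placeholder):

rem emits MySolution.sln.metaproj, the generated MSBuild project, next to the solution
set MSBuildEmitSolution=1
msbuild MySolution.sln /t:Build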
TL;DR:
The solution file is ancient and has some artefacts left over from pre-MSBuild days. Some things just need to be there for 'reasons'.
If they were to build VS from scratch, the solution file would look very different.

My VS project keeps getting rebuilt using msbuild

Latest:
This is definitely a bug in msbuild; there cannot be any other explanation. It may only be happening on Linux, or possibly on a wider range of platforms.
So I decided to build one single project with absolutely no dependencies on others in the solution.
Looking at the captured diagnostics, I see these lines which are very promising:
Skipping target "CoreCompile" because all output files are up-to-date with respect to the input files.
Input files: Annotations.cs;Auth.cs;AuthorizationConfig.cs;Backend.cs;Billing.cs;Code.cs;...
Output files: .obj/TheAgent.dll;.obj/TheAgent.pdb
Set Property: NoWarn=;1701;1702
15:23:27.396 1>Done building target "CoreCompile" in project "TheAgent.csproj".: (TargetId:40)
It looks like my dll and my pdb weren't rebuilt, which is what I expected.
However, something must be happening before or after that causes the timestamps to change (to this build's time rather than the last one's).
The timestamp of the dll is updated both in the intermediate object folder (.obj/) and in the output folder.
Is there a known way of stopping msbuild right after its CoreCompile target?
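One way to experiment is to invoke only the compile stage; Compile is a standard target in the common targets that runs CoreCompile without the later copy steps (whether that isolates the issue here is an assumption):

msbuild TheAgent.csproj /t:Compile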
Update:
I decided to search for "is newer" this time and found instances of these. I don't know how they got into the solution/project files, though:
Input file ".obj/Common.csproj.CoreCompileInputs.cache" is newer than output file ".obj/Common.pdb".
Further to the above, I came across this:
https://github.com/dotnet/project-system/issues/4736
Thinking that this was the issue, I upgraded to dotnet sdk version 2.2.402.
The end result is still the same :(
Original:
I need some pointers on how to troubleshoot this issue. I am using /t:Build to build a solution file.
The resulting executable keeps getting refreshed each time.
First I thought package restore was causing this. I removed that step; however, it didn't make a difference.
Then I looked at this:
https://oz-code.com/blog/visual-studio-keeps-rebuilding-projects-no-good-reason/
I'm basically looking for some text in the diagnostics output which tells me whether a target or a file is out of date and needs to be rebuilt. The above link talks about "project 'B' is not up to date". I don't have a "not up to date" in my msbuild output.
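As an aside, a binary log usually makes this kind of search easier than raw diagnostic text (the solution name is a placeholder; the /bl switch requires MSBuild 15.3 or later):

msbuild MySolution.sln /t:Build /bl:build.binlog

The resulting build.binlog can be opened in the MSBuild Structured Log Viewer, which records the up-to-date check for every target.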
I already had two resources set to Copy Always, which I changed to Copy If Newer.
The above article also talks about circular dependencies. I am checking everything manually. And yes, the references to dependent projects are actually references to the project outputs (DLLs/EXEs), so finding a circular dependency by just checking for that pattern seems a little odd.
There was one more problem in the dotnet platform and/or msbuild causing this to fail.
One of those was this: https://github.com/dotnet/project-system/issues/4736
Installing SDK 3.0.100-preview7-012821 or later solved the problem.
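To make sure that specific SDK is actually the one being picked up, it can be pinned with a global.json next to the solution (a sketch using the dotnet CLI; the version is the one mentioned above):

dotnet new globaljson --sdk-version 3.0.100-preview7-012821
dotnet --version

The second command should then report the pinned version.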

How can I load or run multiple SQL files in Datagrip?

I have set up a project in DataGrip with several SQL files spread over a couple of directories.
My hope is to manage the complexity as this turns into hundreds of files. This is a learning/proof of concept level effort right now.
What I want to do is have a way to run/build/publish this project, but at present the best I have found is to select the files and then do a "Run Files" (Ctrl+Shift+F10). This worked for a bit, but now I have a foreign key that gets run in the wrong order. I don't want to resort to a hack like prefixing the file names with integers to force a specific order; it feels like a real kludge.
How should I accomplish this? I must have missed something, since the alternative is very manual and error-prone. If it matters, the database I am working against is Oracle.
Since DataGrip 2020.1, you can create a Run Configuration and specify the data source and multiple files or scripts.
Refer to the DataGrip blog post for details.

SSDT - Build Deployment Script without dacpac

I've got a question about building a deployment script using SSDT.
Could anyone tell me if it's possible to build a deployment script using SQLPackage.exe where the source file is NOT a dacpac file, but uses the .sql files instead?
To give some background, I've created a project in Visual Studio 2012 for my database schema. This works great, and SSDT builds the folder structure without a problem (functions, stored procedures etc., which contain all the .sql files).
Here's the problem: the database in question is from a legacy system and is riddled with errors. Most of these errors we don't care about anymore, and it's not practical or safe to fix them all, so for years we've basically ignored them. However, it means we can't build the project and therefore can't generate the dacpac file. Now this doesn't prevent us from doing the schema compare and syncing the database with the file system (a local Mercurial repository). However, it does seemingly prevent us from building a deployment script.
What I'm looking for is a way of building the deployment script using SQLPackage.exe without having to generate the dacpac file. I need to use the .sql files in the file system instead. Visual Studio will produce a script of the differences without building the dacpac, so this makes me think it must be possible to do it using SQLPackage.exe using one of the parameters.
Here's an example of SQLPackage.exe which I'd like to adapt to use the .sql files instead of the dacpac:
sqlpackage.exe /Action:Script ^
  /SourceFile:"E:\SourceControl\Project\Database\test_SSDTProject\bin\Debug\test_SSDTProject.dacpac" ^
  /TargetConnectionString:"Data Source=local;Initial Catalog=TestDB;User ID=abc;Password=abc" ^
  /OutputPath:"C:\temp\hbupdate.sql" ^
  /OverwriteFiles:true ^
  /p:IgnoreExtendedProperties=True ^
  /p:IgnorePermissions=True ^
  /p:IgnoreRoleMembership=True ^
  /p:DropObjectsNotInSource=True
This works fine because it uses the dacpac file. However I need to point it at the folder structure where the .sql files are instead.
Any help would be great.
As has been suggested in comments, I think that biting the bullet and fixing the errors is the way ahead. You say
it's not practical or safe to fix them all,
but I think you should give this a bit more thought. I have recently been in a similar situation to you, and the key to emerging from it is to realise that the operational risk associated with dropping procedures and functions that will throw an exception as soon as they are called is zero.
Note that this does not apply if the reason these objects won't build is that they contain cross-database or cross-server references that are present in production but not in your project; this is a separate problem altogether, but also a solvable one.
Nor am I in favour of "exclude from build" as an alternative to "delete"; a while ago I saw a project where this technique had been deployed extensively; it makes it harder to see what does what from the source files and I am now of the opinion that "Build Action=None" is simply "commenting out the bits that don't work" for the Snapchat generation.
The key to all of this, of course, is source control. This addresses the residual risk that one day you might indeed want to implement a working version of one of your currently non-working procedures, using the non-working code as a starting point. It also obviates the need to keep stuff hanging around in the solution using Build Action=None, as one can simply summon an earlier revision of the code that contained the offending objects.
If my experience is any guide, 60 build errors is nothing; these could easily be caused by references to three or four objects that no longer exist, and can be consigned to the dustbin of source control with some enthusiastic use of the "Delete" key.
Do you have a copy of SQL Compare at your disposal? If not, it might be worth downloading the trial to see if it will work in your scenario.
Here are the available switches:
http://documentation.red-gate.com/display/SC10/Switches+used+in+the+command+line
At the very least you'll need to specify the following (a complete example follows the list):
/scripts1:
/server2:
/database2:
/ScriptFile:
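Put together, an invocation might look like this (a sketch; the scripts folder, server, database, and output path are placeholders echoing the sqlpackage example above):

sqlcompare.exe /scripts1:"E:\SourceControl\Project\Database\test_SSDTProject" /server2:local /database2:TestDB /ScriptFile:"C:\temp\hbupdate.sql"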

Is AssemblyInfo.cpp necessary?

I want to remove AssemblyInfo.cpp, because of some metadata errors that sometimes come up.
Is AssemblyInfo.cpp useful for anything? Or can it be removed without any problem?
I've discovered one distinction for this file: it has to do with the values reported by calls to Assembly.GetReferencedAssemblies. I was working on tracking version numbers of our binaries from our SVN repository by embedding the revision numbers into them. Initially I too was updating AssemblyInfo.cpp, and found nothing reported in the file property details tab for the binary. It seemed this file did nothing for me in terms of updating those details, which was not the case with similar updates to a csproj's AssemblyInfo.cs. Why the difference, right?
Now, in one such csproj we happen to reference a vcxproj, and that csproj dumps to a log the versions of all its referenced assemblies using the .NET Assembly.GetReferencedAssemblies method. What I discovered was that the number being reported in that log was not the vcxproj's version as given by the VS_VERSIONINFO resource I added (which does get the version details into the file properties details tab). Instead, the number reported was actually the one defined in AssemblyInfo.cpp.
So for vcxproj files it looks like VS_VERSIONINFO is capable of updating the contents you find under the file properties details tab, but AssemblyInfo.cpp is what exposes the version to GetReferencedAssemblies. In C# these two areas of reporting seem to be unified. Maybe there's a way to direct AssemblyInfo.cpp to propagate into the file details in some fashion, but what I'm going to wind up doing is duplicating the build info to both locations in a prebuild step. Maybe someone can find a better approach.
So far I have never had an AssemblyInfo.cpp in my managed C++ DLLs, so I don't think it is necessary.
(I just added the file to have version information for my C++ DLLs.)
Why not just fix the errors? On that note, what errors are you getting?
This file provides information such as a version number, which is definitely needed in order to use the assembly you have built.