Unable to upgrade/import a dacpac file - sql

ORIGINAL QUESTION:
I'm trying to upgrade a blank database created in a test VM using a .dacpac file, but I get the following error message:
Error SQL72014: .Net SqlClient Data Provider: Msg 15401, Level 16, State 1, Line 1 Windows NT user or group 'SOURCE_DOMAIN\SOURCE SQL Readers' not found. Check the name again.
Error SQL72045: Script execution error. The executed script:
CREATE LOGIN [SOURCE_DOMAIN\SOURCE SQL Readers]
FROM WINDOWS WITH DEFAULT_LANGUAGE = [us_english];
(Microsoft.SqlServer.Dac)
------------------------------
Program Location:
at Microsoft.SqlServer.Dac.DeployOperation.ThrowIfErrorManagerHasErrors()
at Microsoft.SqlServer.Dac.DeployOperation.<>c__DisplayClass14.<>c__DisplayClass16.<CreatePlanExecutionOperation>b__13()
at Microsoft.Data.Tools.Schema.Sql.Dac.OperationLogger.Capture(Action action)
at Microsoft.SqlServer.Dac.DeployOperation.<>c__DisplayClass14.<CreatePlanExecutionOperation>b__12(Object operation, CancellationToken token)
at Microsoft.SqlServer.Dac.Operation.Microsoft.SqlServer.Dac.IOperation.Run(OperationContext context)
at Microsoft.SqlServer.Dac.ReportMessageOperation.Microsoft.SqlServer.Dac.IOperation.Run(OperationContext context)
at Microsoft.SqlServer.Dac.OperationExtension.CompositeOperation.Microsoft.SqlServer.Dac.IOperation.Run(OperationContext context)
at Microsoft.SqlServer.Dac.OperationExtension.CompositeOperation.Microsoft.SqlServer.Dac.IOperation.Run(OperationContext context)
at Microsoft.SqlServer.Dac.DeployOperation.Microsoft.SqlServer.Dac.IOperation.Run(OperationContext context)
at Microsoft.SqlServer.Dac.OperationExtension.Execute(IOperation operation, DacLoggingContext loggingContext, CancellationToken cancellationToken)
at Microsoft.SqlServer.Dac.DacServices.InternalDeploy(IPackageSource packageSource, Boolean isDacpac, String targetDatabaseName, DacDeployOptions options, CancellationToken cancellationToken, DacLoggingContext loggingContext, Action`3 reportPlanOperation, Boolean executePlan)
at Microsoft.SqlServer.Dac.DacServices.Deploy(DacPackage package, String targetDatabaseName, Boolean upgradeExisting, DacDeployOptions options, Nullable`1 cancellationToken)
at Microsoft.SqlServer.Management.Dac.DacWizard.UpgradeModel.RunAction()
at Microsoft.SqlServer.Management.Dac.DacWizard.ExecuteDacPage.backgroundWorker1_DoWork(Object sender, DoWorkEventArgs e)
at System.ComponentModel.BackgroundWorker.OnDoWork(DoWorkEventArgs e)
at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument)
I'm assuming that user existed in the source but not in the destination. Will creating that user on the VM fix this issue, or will I need a different approach to re-create the source schema in the VM destination for testing purposes?
UPDATE TO QUESTION 1:
The .dacpac file is generated on a server which is on a totally different domain and it will not be possible for the test VM to ever be on the same domain. With that in mind, how do I get the .dacpac file to work on the test VM?

If you still have access to the source, you could generate the .dacpac again, this time ignoring the logins. Depending on which tool you use, you should have access to an option like "Include User Login Mapping".
The most robust set of options is in Visual Studio; see "How to create DACPAC file?" by Kamil Nowinski:
Image source: https://sqlplayer.net/wp-content/uploads/2018/10/visual-studio-extract-dacpac-options.png
You can recreate the proper logins and users afterwards with your own SQL script.
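For illustration, a minimal sketch of such a script, assuming a local Windows group on the test VM (the names are placeholders):
-- Recreate the login and the matching database user by hand
CREATE LOGIN [TESTVM\SQL Readers] FROM WINDOWS WITH DEFAULT_LANGUAGE = [us_english];
CREATE USER [TESTVM\SQL Readers] FOR LOGIN [TESTVM\SQL Readers];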
Related: Using Publish Profiles to Deploy a DACPAC Database Without User Accounts
The solution to this problem lies in defining an appropriate publish profile for your DACPAC, which instructs your chosen deployment tool (SqlPackage.exe, Visual Studio, or Azure DevOps) on how to carry out the deployment.
The profile is defined as an XML file.
ExcludeUsers
ExcludeLogins
ExcludeDatabaseRoles
By setting these options to True within our publish profile, creation or modification of these objects will be skipped entirely during any database deployment.
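As a sketch, a minimal publish profile using those options might look like this (property names per the post above; the target database name is a placeholder):
<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <TargetDatabaseName>YourDatabaseName</TargetDatabaseName>
    <ExcludeUsers>True</ExcludeUsers>
    <ExcludeLogins>True</ExcludeLogins>
    <ExcludeDatabaseRoles>True</ExcludeDatabaseRoles>
  </PropertyGroup>
</Project>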
One more option is to use dbatools.io - Export-DbaDacPackage
Key point here is:
$exportProperties = "/p:IgnorePermissions=True /p:IgnoreUserLoginMappings=True" # Ignore
and publish.xml:
...
<ExcludeLogins>True</ExcludeLogins>
<IgnorePermissions>True</IgnorePermissions>
<IgnoreLoginSids>True</IgnoreLoginSids>
<IgnoreRoleMembership>True</IgnoreRoleMembership>
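A sketch of the export end with dbatools (instance, database, and path are placeholders; check the Export-DbaDacPackage parameters for your dbatools version):
# Export a dacpac while ignoring permissions and user/login mappings
$exportProperties = "/p:IgnorePermissions=True /p:IgnoreUserLoginMappings=True"
Export-DbaDacPackage -SqlInstance SOURCESRV -Database YourDatabaseName -Path C:\temp -ExtendedProperties $exportProperties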
Summary:
create a dacpac without logins
create a publish.xml file that will ignore permissions
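Putting the two together, the publish step might then look like this (file names are placeholders; /pr points SqlPackage at the profile):
.\SqlPackage.exe /a:Publish /sf:YourDatabase.dacpac /pr:YourDatabase.publish.xml /tsn:"(localdb)\mssqllocaldb"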

Creating the user inside the VM is one way to solve this issue, but you will need to change 'SOURCE_DOMAIN' to the VM hostname, as the user will be part of the local user database.
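A minimal sketch of that, reusing the failing statement with the VM's hostname in place of the source domain (VMHOSTNAME is a placeholder, and the local group 'SOURCE SQL Readers' must exist on the VM first):
-- Recreate the login against a local VM group instead of the unreachable source domain
CREATE LOGIN [VMHOSTNAME\SOURCE SQL Readers]
FROM WINDOWS WITH DEFAULT_LANGUAGE = [us_english];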
Alternatively, the best solution may be to fix the VM's communication with the domain controller, so that authentication works and the domain accounts are actually visible within the VM.

Take a look at this: this error usually occurs because of the COMPATIBILITY_LEVEL. I would recommend trying this query out:
ALTER DATABASE database_name
SET COMPATIBILITY_LEVEL = 130;
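To check the current level first, you could run something like:
SELECT name, compatibility_level FROM sys.databases WHERE name = 'database_name';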
Hope it helps!

If a dacpac contains users or groups that aren’t on the domain where the dacpac is being deployed, then one way to deploy it is using the SqlPackage command line tool, as this allows you to explicitly list the object types you want to exclude.
To exclude users and groups, the PowerShell command would be something like this:
.\SqlPackage.exe `
/a:Publish `
/tsn:"(localdb)\mssqllocaldb" `
/tdn:YourDatabaseName `
/p:ExcludeObjectTypes="Users;RoleMembership;Logins;ServerRoles;ServerRoleMembership;Permissions" `
/sf:YourFile.dacpac
This command uses the following switches:
/a (Action): the action to run, in this case Publish
/tsn (TargetServerName): the name of the server to deploy to
/tdn (TargetDatabaseName): the name of the database to deploy to
/p (Properties): name/value pairs of action-specific properties, in this case:
ExcludeObjectTypes: a semicolon-delimited list of object types that should be ignored
/sf (SourceFile): the dacpac file to deploy
More details of the syntax for Publish (including a list of the object types that can be excluded) are available in the docs for the publish action.

Related

Why is a SQL script not valid when referenced by post-deploy?

I have a Script.PostDeployment.sql which references DataFactoryRights.sql
Script.PostDeployment.sql, build action PostDeploy:
:r .\DataFactoryRights.sql
DataFactoryRights.sql, build action None:
CREATE USER [dataFactory] FROM EXTERNAL PROVIDER;
When I try to build, I get the error:
Severity Code Description Project File Line Suppression State
Error SQL72007: The syntax check failed 'Incorrect syntax near FROM.' in the batch near:
'CREATE USER [dataFactory] FROM EXTERNAL PROVIDER;'
If I comment out like --:r .\DataFactoryRights.sql, it builds ok
The error is caused by the T-SQL statement:
CREATE USER [dataFactory] FROM EXTERNAL PROVIDER;
Data Factory is a service principal.
Microsoft confirmed that:
"We currently don't allow service principals to create users in SQL DB. We will update the documentation when this feature is available. Thanks!"
That's why it builds OK when you comment it out like --:r .\DataFactoryRights.sql.
Ref: https://github.com/MicrosoftDocs/sql-docs/issues/2323#issuecomment-595417579
Hope this helps.
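As an aside (this workaround is not from the linked thread, so treat it as an assumption): if the statement has to stay in the post-deployment script, a common way to get syntax the build-time parser rejects past SQL72007 is to wrap it in dynamic SQL:
-- The parser does not inspect the string contents, so the build succeeds;
-- the statement still executes at deployment time.
EXEC (N'CREATE USER [dataFactory] FROM EXTERNAL PROVIDER;');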

Setting user credentials on aws instance using jclouds

I am trying to create an AWS instance using jclouds 1.9.0 and then run a script on it (via SSH). I am following the example located here, but I am getting authentication failed errors when the client (a Java program) tries to connect to the instance. The AWS console shows that the instance is up and running.
The example tries to create a LoginCredentials object
String user = System.getProperty("user.name");
String privateKey = Files.toString(new File(System.getProperty("user.home") + "/.ssh/id_rsa"), UTF_8);
return LoginCredentials.builder().user(user).privateKey(privateKey).build();
which is later used by the SSH client
responses = compute.runScriptOnNodesMatching(
    inGroup(groupName),              // predicate used to select nodes
    exec(command),                   // what you actually intend to run
    overrideLoginCredentials(login)  // use my local user & ssh key
        .runAsRoot(false)            // don't attempt to run as root (sudo)
        .wrapInInitScript(false));
Some login information is injected into the instance with the following commands:
Statement bootInstructions = AdminAccess.standard();
templateBuilder.options(runScript(bootInstructions));
Since I am on a Windows machine, the creation of LoginCredentials 'fails', and thus I alter its code to
String user = "ec2-user";
String privateKey = "-----BEGIN RSA PRIVATE KEY-----.....-----END RSA PRIVATE KEY-----";
return LoginCredentials.builder().user(user).privateKey(privateKey).build();
I also tried to define the credentials while building the template, as described in the "EC2: In Depth" guide, but with no luck.
An alternative is to build the instance and inject the keypair as follows, but this implies that I need to have the SSH key stored in my AWS console, which is not currently the case, and it also breaks the ability to run a script (via SSH), since I cannot infer the NodeMetadata from a RunningInstance object.
RunInstancesOptions options = RunInstancesOptions.Builder.asType("t2.micro").withKeyName(keypair).withSecurityGroup(securityGroup).withUserData(script.getBytes());
Any suggestions?
Note: While I am currently testing this on aws, I want to keep the code as decoupled from the provider as possible.
Update 26/10/2015
Based on @Ignasi Barrera's answer, I changed my implementation by adding .init(new MyAdminAccessConfiguration()) while creating the bootInstructions
Statement bootInstructions = AdminAccess.standard().init(new MyAdminAccessConfiguration());
templateBuilder.options(runScript(bootInstructions));
Where MyAdminAccessConfiguration is my own implementation of the AdminAccessConfiguration interface, as @Ignasi Barrera described.
I think the issue lies in the fact that the jclouds code runs on a Windows machine and jclouds makes some Unix assumptions by default.
There are two different things here: first, the AdminAccess.standard() is used to configure a user in the deployed node once it boots, and later the LoginCredentials object passed to the run script method is used to authenticate against the user that has been created with the previous statement.
The issue here is that the AdminAccess.standard() reads the "current user" information and assumes a Unix System. That user information is provided by this Default class, and in your case I'm pretty sure it will fallback to the catch block and return an auto-generated SSH key pair. That means, the AdminAccess.standard() is creating a user in the node with an auto-generated (random) SSH key, but the LoginCredentials you are building don't match those keys, thus the authentication failure.
Since the AdminAccess entity is immutable, the better and cleaner approach to fix this is to create your own implementation of the AdminAccessConfiguration interface. You can just copy the entire Default class and change the Unix specific bits to accommodate the SSH setup in your Windows machine. Once you have the implementation class, you can inject it by creating a Guice module and passing it to the list of modules provided when creating the jclouds context. Something like:
// Create the custom module to inject your implementation
Module windowsAdminAccess = new AbstractModule() {
    @Override protected void configure() {
        bind(AdminAccessConfiguration.class).to(YourCustomWindowsImpl.class).in(Scopes.SINGLETON);
    }
};
// Provide the module in the module list when creating the context
ComputeServiceContext context = ContextBuilder.newBuilder("aws-ec2")
    .credentials("api-key", "api-secret")
    .modules(ImmutableSet.<Module> of(windowsAdminAccess, new SshjSshClientModule()))
    .buildView(ComputeServiceContext.class);

Adding a TFS server group to access levels via command line

I am creating a group of users within TFS 2013 and I want to add them to a non-default access level (e.g. the Full access level), but I noticed I am only able to do this through the web interface, by adding a TFS group under that level. I am wondering if there is a way to do this via the developer tool (command line), as everything I am doing is being done in a batch script.
Any input would be appreciated. Thanks!
Create three TFS server groups and add these groups to the different access levels (e.g. TFS_ACCESS_LEVEL_(NONE|STANDARD|FULL)). Then use the TFSSecurity command-line tool to add groups to these existing, mapped groups (tfssecurity /g+ TFS_ACCESS_LEVEL_NONE GroupYouWantToHaveThisAccessLevel), as shown below. There is no other way to directly add people to the access levels, except perhaps through the Object Model using C#.
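For example, in a batch script this might look like the following (group and collection names are placeholders):
tfssecurity /g+ TFS_ACCESS_LEVEL_FULL "DOMAIN\GroupYouWantToHaveFullAccess" /collection:http://yourtfs:8080/tfs/YourCollection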
For the record, tfssecurity may require the project URI, which can be obtained via the API. This is easy to do in PowerShell; here is how to create a TFS group:
[psobject] $tfs = get-tfs -serverName $collection
$projectUri = ($tfs.CSS.ListAllProjects() | where { $_.Name -eq $project }).Uri
& $TFSSecurity /gc $projectUri $groupName $groupDescription /collection:$collection
Full script at TfsSecurity wrapper.

NHibernate - Cannot set the ConfigurationCache property after calling Init

In my S#arp Arch 2.0 project, I'm communicating with 2 databases. This runs fine locally with the ASP.NET Development Server (VS 2010) and passes unit tests that require talking to either database.
Next step was to Publish the project (using VS' built-in "Publish" menu option) to the in-house development server (Windows Server 2008 R2) and fire this thing up on a real server where people could actually see it.
Now I get the exception shown in the title when I try to run the application. The exception is thrown at the new NHibernateConfigurationFileCache() line below:
private void InitialiseNHibernateSessions()
{
    NHibernateSession.ConfigurationCache = new NHibernateConfigurationFileCache();
    NHibernateSession.InitStorage(this.webSessionStorage);
    NHibernateSession.AddConfiguration(NHibernateSession.DefaultFactoryKey,
        new[] { Server.MapPath("~/bin/SRN2.Infrastructure.dll") },
        new AutoPersistenceModelGenerator().Generate(),
        Server.MapPath("~/NHibernate.config"),
        null, null, null);
    NHibernateSession.AddConfiguration(SRN2.Infrastructure.DataGlobals.OTHER_DB_FACTORY_KEY,
        new string[] { Server.MapPath("~/bin/SRN2.Infrastructure.dll") },
        new AutoPersistenceModelGenerator().Generate(),
        Server.MapPath("~/NHibernate-OTHER.config"),
        null, null, null);
}
Stack trace:
[InvalidOperationException: Cannot set the ConfigurationCache property after calling Init]
SharpArch.NHibernate.NHibernateSession.set_ConfigurationCache(INHibernateConfigurationCache value) +105
SRN2.Web.Mvc.MvcApplication.InitialiseNHibernateSessions() in C:\code\SRN2-Sharp2\trunk\Solutions\SRN2.Web.Mvc\Global.asax.cs:122
SharpArch.NHibernate.NHibernateInitializer.InitializeNHibernateOnce(Action initMethod) +116
SRN2.Web.Mvc.MvcApplication.Application_BeginRequest(Object sender, EventArgs e) in C:\code\SRN2-Sharp2\trunk\Solutions\SRN2.Web.Mvc\Global.asax.cs:71
System.Web.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +148
System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +75
Jon is right; it does sound like your InitialiseNHibernateSessions method is being called multiple times. You don't have to use the config cache; have you tried disabling it?
The NHibernate configuration is cached to file in order to improve start up time. If the configuration has not changed it is loaded from the cache file. The default location of the cache file is the system temporary file folder (e.g. Path.GetTempPath()).
If you don't have file permissions, or don't need config caching, just remove or comment out the line that initialises the configuration cache, i.e. this line:
NHibernateSession.ConfigurationCache = new NHibernateConfigurationFileCache();
This error occurred, for one application, roughly every month or two for an extended period. I haven't really been able to fix it permanently, but I did discover that the following procedure resolves the error:
Delete the temporary configuration cache files; they should be located in the Windows temporary files folder and there should be a file for each database (e.g. DatabaseName--1973822310.bin) and another named something like nhibernate.current_session--1973822310.bin.
Restart the IIS web site for your application.
Recycle the application pool for your application.
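A sketch of that procedure as a script, assuming the cache files sit in the current user's temp folder and that the site and application pool names are placeholders:
# Delete the cached NHibernate configuration files (pattern per the file names above)
Remove-Item (Join-Path $env:TEMP '*--*.bin')
# Restart the site and recycle its application pool (WebAdministration module)
Import-Module WebAdministration
Stop-Website 'YourSite'
Start-Website 'YourSite'
Restart-WebAppPool 'YourAppPool'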

WebSharingAppDemo-CEProviderEndToEnd Queries peerProvider for NeedsScope before any files are batched to the server. This seems out of order?

I'm building an application based on the WebSharingAppDemo-CEProviderEndToEnd. When I deploy the server portion on a server, the code gives the error "The path is not valid. Check the directory for the database." during the call to NeedsScope() in the CeWebSyncService.cs file.
Obviously the server can't access the client's .sdf, but what is supposed to happen to make this work? The app uses batching to send the data, and the batches have to be marshalled across to the temp directory, but this problem occurs before any files have been batched over. There is nothing for the server to look at to determine whether the peerProvider needs scope. What am I missing?
public bool NeedsScope()
{
    Log("NeedsSchema: {0}", this.peerProvider.Connection.ConnectionString);
    SqlCeSyncScopeProvisioning prov = new SqlCeSyncScopeProvisioning();
    return !prov.ScopeExists(this.peerProvider.ScopeName, (SqlCeConnection)this.peerProvider.Connection);
}
I noticed that the sample was using a proxy to speak with the CE file, but a provider (not a proxy) to speak with the SQL server.
I switched it so there is a proxy to reach the SQL server and a provider to access the CE file.
That seems to work for me.
stats = synchronizationHelper.SynchronizeProviders(srcProvider, destinationProxy);
vs.
SyncOperationStatistics stats = syncHelper.SynchronizeProviders(srcProxy, destinationProvider);