NHibernate - Cannot set the ConfigurationCache property after calling Init

In my S#arp Arch 2.0 project, I'm communicating with 2 databases. This runs fine locally with the ASP.Net Development Server (VS 2010) and passes the unit tests that talk to either database.
Next step was to Publish the project (using VS' built-in "Publish" menu option) to the in-house development server (Windows Server 2008 R2) and fire this thing up on a real server where people could actually see it.
Now I get the exception shown in the title when I try to run the application. The exception is thrown at the new NHibernateConfigurationFileCache() line below:
private void InitialiseNHibernateSessions()
{
    NHibernateSession.ConfigurationCache = new NHibernateConfigurationFileCache();
    NHibernateSession.InitStorage(this.webSessionStorage);
    NHibernateSession.AddConfiguration(NHibernateSession.DefaultFactoryKey,
        new[] { Server.MapPath("~/bin/SRN2.Infrastructure.dll") },
        new AutoPersistenceModelGenerator().Generate(),
        Server.MapPath("~/NHibernate.config"),
        null, null, null);
    NHibernateSession.AddConfiguration(SRN2.Infrastructure.DataGlobals.OTHER_DB_FACTORY_KEY,
        new string[] { Server.MapPath("~/bin/SRN2.Infrastructure.dll") },
        new AutoPersistenceModelGenerator().Generate(),
        Server.MapPath("~/NHibernate-OTHER.config"),
        null, null, null);
}
Stack trace:
[InvalidOperationException: Cannot set the ConfigurationCache property after calling Init]
SharpArch.NHibernate.NHibernateSession.set_ConfigurationCache(INHibernateConfigurationCache value) +105
SRN2.Web.Mvc.MvcApplication.InitialiseNHibernateSessions() in C:\code\SRN2-Sharp2\trunk\Solutions\SRN2.Web.Mvc\Global.asax.cs:122
SharpArch.NHibernate.NHibernateInitializer.InitializeNHibernateOnce(Action initMethod) +116
SRN2.Web.Mvc.MvcApplication.Application_BeginRequest(Object sender, EventArgs e) in C:\code\SRN2-Sharp2\trunk\Solutions\SRN2.Web.Mvc\Global.asax.cs:71
System.Web.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +148
System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +75

Jon is right, it does sound like your InitialiseNHibernateSessions method is being called multiple times. You don't have to use the config cache; have you tried disabling it?
The NHibernate configuration is cached to file in order to improve start up time. If the configuration has not changed it is loaded from the cache file. The default location of the cache file is the system temporary file folder (e.g. Path.GetTempPath()).
If you don't have file permissions, or don't need config caching, just remove or comment out the line that initialises the configuration cache, i.e. this line:
NHibernateSession.ConfigurationCache = new NHibernateConfigurationFileCache();

This error occurred, for one application, roughly every month or two for an extended period. I haven't really been able to fix it permanently, but I did discover that the following procedure resolves the error:
Delete the temporary configuration cache files; they should be located in the Windows temporary files folder and there should be a file for each database (e.g. DatabaseName--1973822310.bin) and another named something like nhibernate.current_session--1973822310.bin.
Restart the IIS web site for your application.
Recycle the application pool for your application.
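If you want to script the clean-up step, a rough sketch along these lines could be used; note that the "*--*.bin" file pattern is an assumption based on the file names above, so review the matches before actually deleting anything:

using System;
using System.IO;

class ClearNHibernateConfigCache
{
    static void Main()
    {
        // Assumption: the cached configuration files live in the system temp folder
        // and follow the "<name>--<number>.bin" pattern shown above.
        string tempPath = Path.GetTempPath();
        foreach (string file in Directory.GetFiles(tempPath, "*--*.bin"))
        {
            Console.WriteLine("Deleting " + file);
            File.Delete(file);
        }
    }
}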

Related

Unable to upgrade/import a dacpac file

ORIGINAL QUESTION:
Trying to upgrade a blank database created in a test VM using a .dacpac file, but get the following error message:
Error SQL72014: .Net SqlClient Data Provider: Msg 15401, Level 16, State 1, Line 1 Windows NT user or group 'SOURCE_DOMAIN\SOURCE SQL Readers' not found. Check the name again.
Error SQL72045: Script execution error. The executed script:
CREATE LOGIN [SOURCE_DOMAIN\SOURCE SQL Readers]
FROM WINDOWS WITH DEFAULT_LANGUAGE = [us_english];
(Microsoft.SqlServer.Dac)
------------------------------
Program Location:
at Microsoft.SqlServer.Dac.DeployOperation.ThrowIfErrorManagerHasErrors()
at Microsoft.SqlServer.Dac.DeployOperation.<>c__DisplayClass14.<>c__DisplayClass16.<CreatePlanExecutionOperation>b__13()
at Microsoft.Data.Tools.Schema.Sql.Dac.OperationLogger.Capture(Action action)
at Microsoft.SqlServer.Dac.DeployOperation.<>c__DisplayClass14.<CreatePlanExecutionOperation>b__12(Object operation, CancellationToken token)
at Microsoft.SqlServer.Dac.Operation.Microsoft.SqlServer.Dac.IOperation.Run(OperationContext context)
at Microsoft.SqlServer.Dac.ReportMessageOperation.Microsoft.SqlServer.Dac.IOperation.Run(OperationContext context)
at Microsoft.SqlServer.Dac.OperationExtension.CompositeOperation.Microsoft.SqlServer.Dac.IOperation.Run(OperationContext context)
at Microsoft.SqlServer.Dac.OperationExtension.CompositeOperation.Microsoft.SqlServer.Dac.IOperation.Run(OperationContext context)
at Microsoft.SqlServer.Dac.DeployOperation.Microsoft.SqlServer.Dac.IOperation.Run(OperationContext context)
at Microsoft.SqlServer.Dac.OperationExtension.Execute(IOperation operation, DacLoggingContext loggingContext, CancellationToken cancellationToken)
at Microsoft.SqlServer.Dac.DacServices.InternalDeploy(IPackageSource packageSource, Boolean isDacpac, String targetDatabaseName, DacDeployOptions options, CancellationToken cancellationToken, DacLoggingContext loggingContext, Action`3 reportPlanOperation, Boolean executePlan)
at Microsoft.SqlServer.Dac.DacServices.Deploy(DacPackage package, String targetDatabaseName, Boolean upgradeExisting, DacDeployOptions options, Nullable`1 cancellationToken)
at Microsoft.SqlServer.Management.Dac.DacWizard.UpgradeModel.RunAction()
at Microsoft.SqlServer.Management.Dac.DacWizard.ExecuteDacPage.backgroundWorker1_DoWork(Object sender, DoWorkEventArgs e)
at System.ComponentModel.BackgroundWorker.OnDoWork(DoWorkEventArgs e)
at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument)
I'm assuming that user existed in the source, but not in the destination. Will creating that user on the VM fix this issue, or will I need to use a different approach to get the schema data from the source re-created in a VM destination for testing purposes?
UPDATE TO QUESTION 1:
The .dacpac file is generated on a server which is on a totally different domain and it will not be possible for the test VM to ever be on the same domain. With that in mind, how do I get the .dacpac file to work on the test VM?
If you still have access to the VM, you could generate the .dacpac again, this time ignoring the logins. Depending on which tool you use, you should have access to an option like "Include User Login Mapping".
The most robust option is Visual Studio; see "How to create DACPAC file?" by Kamil Nowinski:
Image source: https://sqlplayer.net/wp-content/uploads/2018/10/visual-studio-extract-dacpac-options.png
You could recreate the proper logins and users afterwards with your own SQL script.
Related: Using Publish Profiles to Deploy a DACPAC Database Without User Accounts
The solution to this problem lies in defining an appropriate publish profile for your DACPAC, which then instructs your chosen deployment tool – SQLPackage.exe, Visual Studio, or Azure DevOps – on how to carry out the deployment.
The profile is defined as an XML file.
ExcludeUsers
ExcludeLogins
ExcludeDatabaseRoles
By setting these options to True within our publish profile, creation or modification of these objects will be skipped entirely during any database deployment.
One more option is to use dbatools.io - Export-DbaDacPackage
Key point here is:
$exportProperties = "/p:IgnorePermissions=True /p:IgnoreUserLoginMappings=True" # Ignore
and publish.xml:
...
<ExcludeLogins>True</ExcludeLogins>
<IgnorePermissions>True</IgnorePermissions>
<IgnoreLoginSids>True</IgnoreLoginSids>
<IgnoreRoleMembership>True</IgnoreRoleMembership>
Summary:
create a dacpac without login
create a publish.xml file that will ignore permissions
Creating the user inside the VM is one way to solve this issue, but you will need to change 'SOURCE_DOMAIN' to the VM hostname, as the user will be part of the local user database.
Probably the best solution is to fix the VM's communication with the Domain Controller, so that authentication works and the user accounts are actually visible within the VM.
Take a look at this: this error usually occurs because of the COMPATIBILITY_LEVEL.
I would recommend trying this query out:
ALTER DATABASE database_name
SET COMPATIBILITY_LEVEL = 130;
Hope it helps!
If a dacpac contains users or groups that aren’t on the domain where the dacpac is being deployed, then one way to deploy it is using the SqlPackage command line tool, as this allows you to explicitly list the object types you want to exclude.
To exclude users and groups, the PowerShell command would be something like this:
.\SqlPackage.exe `
/a:Publish `
/tsn:"(localdb)\mssqllocaldb" `
/tdn:YourDatabaseName `
/p:ExcludeObjectTypes="Users;RoleMembership;Logins;ServerRoles;ServerRoleMembership;Permissions" `
/sf:YourFile.dacpac
This command uses the following switches:
/a (Action): the action to run, in this case Publish
/tsn (TargetServerName): the name of the server to deploy to
/tdn (TargetDatabaseName): the name of the database to deploy to
/p (Properties): name value pair of action-specific properties, in this case:
ExcludeObjectTypes: a semicolon-delimited list of object types that should be ignored
/sf (SourceFile): the dacpac file to deploy
More details of the syntax for Publish (including a list of the object types that can be excluded) are available in the docs for the publish action.
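If you would rather drive the deployment from .NET code than from the SqlPackage command line, the same exclusions can be expressed through the Microsoft.SqlServer.Dac API that appears in the stack trace above. A rough sketch; the package path, connection string and database name are placeholders:

using Microsoft.SqlServer.Dac;

class DeployWithoutUsers
{
    static void Main()
    {
        // Placeholder paths and names - adjust to your environment.
        var package = DacPackage.Load(@"C:\temp\YourFile.dacpac");
        var services = new DacServices(@"Data Source=(localdb)\mssqllocaldb;Integrated Security=True");

        var options = new DacDeployOptions
        {
            // Skip the object types that reference domain accounts.
            ExcludeObjectTypes = new[]
            {
                ObjectType.Users,
                ObjectType.Logins,
                ObjectType.RoleMembership,
                ObjectType.Permissions
            }
        };

        services.Deploy(package, "YourDatabaseName", upgradeExisting: true, options: options);
    }
}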

Sql INSERT statement in VB application doesn't insert data [duplicate]

I have the following C# code in a console application.
Whenever I debug the application and run the query1 (which inserts a new value into the database) and then run query2 (which displays all the entries in the database), I can see the new entry I inserted clearly. However, when I close the application and check the table in the database (in Visual Studio), it is gone. I have no idea why it is not saving.
using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Data.SqlServerCe;
using System.Data;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            try
            {
                string fileName = "FlowerShop.sdf";
                string fileLocation = "|DataDirectory|\\";
                DatabaseAccess dbAccess = new DatabaseAccess();
                dbAccess.Connect(fileName, fileLocation);
                Console.WriteLine("Connected to the following database:\n" + fileLocation + fileName + "\n");

                string query = "Insert into Products(Name, UnitPrice, UnitsInStock) values('NewItem', 500, 90)";
                string res = dbAccess.ExecuteQuery(query);
                Console.WriteLine(res);

                string query2 = "Select * from Products";
                string res2 = dbAccess.QueryData(query2);
                Console.WriteLine(res2);
                Console.ReadLine();
            }
            catch (Exception e)
            {
                Console.WriteLine(e);
                Console.ReadLine();
            }
        }
    }

    class DatabaseAccess
    {
        private SqlCeConnection _connection;

        public void Connect(string fileName, string fileLocation)
        {
            Connect(@"Data Source=" + fileLocation + fileName);
        }

        public void Connect(string connectionString)
        {
            _connection = new SqlCeConnection(connectionString);
        }

        public string QueryData(string query)
        {
            _connection.Open();
            using (SqlCeDataAdapter da = new SqlCeDataAdapter(query, _connection))
            using (DataSet ds = new DataSet("Data Set"))
            {
                da.Fill(ds);
                _connection.Close();
                return ds.Tables[0].ToReadableString(); // an extension method I created
            }
        }

        public string ExecuteQuery(string query)
        {
            _connection.Open();
            using (SqlCeCommand c = new SqlCeCommand(query, _connection))
            {
                int r = c.ExecuteNonQuery();
                _connection.Close();
                return r.ToString();
            }
        }
    }
}
EDIT: Forgot to mention that I am using SQL Server Compact Edition 4 and VS2012 Express.
It is a quite common problem. You use the |DataDirectory| substitution string. This means that, while debugging your app in the Visual Studio environment, the database used by your application is located in the BIN\DEBUG subfolder (or x86 variant) of your project. And this works well, as you don't get any kind of error connecting to the database and making update operations.
But then, you exit the debug session and you look at your database through the Visual Studio Server Explorer (or any other suitable tool). This window has a different connection string (probably pointing to the copy of your database in the project folder). You search your tables and you don't see the changes.
Then the problem gets worse. You restart VS to go hunting for the bug in your app, but you have your database file listed among your project files and the property Copy to Output Directory is set to Copy Always. At this point Visual Studio obliges and copies the original database file from the project folder to the output folder (BIN\DEBUG), and thus your previous changes are lost.
Now your application inserts/updates the target table again, you again can't find any error in your code, and you restart the loop until you decide to post or search on StackOverflow.
You can stop this problem by clicking on the database file listed in your Solution Explorer and changing the property Copy To Output Directory to Copy If Newer or Never Copy. Also, you could update your connection string in the Server Explorer to look at the working copy of your database, or create a second connection. The first one still points to the database in the project folder, while the second one points to the database in the BIN\DEBUG folder. In this way you can keep the original database ready for deployment purposes and schema changes, while with the second connection you can look at the effective results of your coding efforts.
EDIT Special warning for MS-Access database users. The simple act of looking at your table changes the modified date of your database even if you don't write or change anything. So the flag Copy if Newer kicks in and the database file is copied to the output directory. With Access it is better to use Never Copy.
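If you are unsure which physical copy of the .sdf your code is actually writing to, a quick check like the sketch below can help; |DataDirectory| resolves to the AppDomain's "DataDirectory" value when it has been set and otherwise falls back to the application's base directory (BIN\DEBUG while debugging):

using System;

class WhereIsMyDatabase
{
    static void Main()
    {
        // The value |DataDirectory| expands to, if it has been set explicitly.
        object dataDirectory = AppDomain.CurrentDomain.GetData("DataDirectory");

        // The fallback location used when DataDirectory is null (BIN\DEBUG while debugging).
        string baseDirectory = AppDomain.CurrentDomain.BaseDirectory;

        Console.WriteLine("DataDirectory : " + (dataDirectory ?? "(not set - base directory is used)"));
        Console.WriteLine("BaseDirectory : " + baseDirectory);
    }
}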
Committing changes / saving changes across debug sessions is a familiar topic in SQL CE forums. It is something that trips up quite a few people. I'll post links to source articles below, but I wanted to paste the answer that seems to get the best results to the most people:
You have several options to change this behavior. If your sdf file is part of the content of your project, this will affect how data is persisted. Remember that when you debug, all output of your project (including the sdf) is in the bin/debug folder.
You can decide not to include the sdf file as part of your project and manage the file location runtime.
If you are using "copy if newer", and project changes you make to the database will overwrite any runtime/debug changes.
If you are using "Do not copy", you will have to specify the location in code (as two levels above where your program is running).
If you have "Copy always", any changes made during runtime will always be overwritten
Answer Source
Here is a link to some further discussion and how-to documentation.

Using Kentico API from LINQPad is throwing an exception

I am trying to call Kentico API from LINQPad, but getting the following exception:
[AbstractProvider.GetProvider]: The object type 'cms.document' is missing the provider type configuration
My code is:
void Main()
{
    var pages = DocumentHelper.GetDocuments("CMS.MenuItem").Path("/", PathTypeEnum.Children);
    pages.Dump();
}
Note: I tested the code from Visual Studio and it works, but not from LINQPad.
The problem is that during the initial discovery Kentico looks only at the following paths:
AppDomain.CurrentDomain.BaseDirectory
AppDomain.CurrentDomain.RelativeSearchPath
Which, in the case of LINQPad, are C:\Program Files (x86)\LINQPad4\ and null. Therefore the providers do not get resolved.
I've tried running the code in a new AppDomain but it doesn't seem to work in LINQPad. I suggest submitting this to Kentico as an idea or an issue.
A workaround would be copying the LINQPad executable to the location of the Kentico DLLs - e.g. C:\inetpub\wwwroot\Kentico82\Lib. That works just fine.
Update (thanks to Joe Albahari):
If you wrap your code in this:
var appDomain = Util.CreateAppDomain("AD", null, new AppDomainSetup
{
    PrivateBinPath = @"C:\inetpub\wwwroot\Kentico82\CMS\bin",
});
appDomain.DoCallBack(() => { /* your code */ });
you'll be able to execute it. However, you can't Dump() it to the output window. But you can write it to a text file for example. If you experience the following error:
FileNotFoundException: Could not load file or assembly 'LINQPad, Version=1.0.0.0, Culture=neutral, PublicKeyToken=21353812cd2a2db5' or one of its dependencies. The system cannot find the file specified.
Go to Edit -> Preferences -> Advanced -> Run each query in its own process and turn it off.
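Putting the pieces together, the workaround might look roughly like the sketch below. The Kentico bin path, the output file and the DocumentName projection are assumptions for illustration, and the usual Kentico namespaces are expected to be imported into the query:

void Main()
{
    // Assumed Kentico web root - adjust to your installation.
    var appDomain = Util.CreateAppDomain("KenticoAD", null, new AppDomainSetup
    {
        PrivateBinPath = @"C:\inetpub\wwwroot\Kentico82\CMS\bin",
    });

    appDomain.DoCallBack(() =>
    {
        var pages = DocumentHelper.GetDocuments("CMS.MenuItem").Path("/", PathTypeEnum.Children);

        // Dump() is not available inside the callback, so write the results to a file instead.
        System.IO.File.WriteAllLines(@"C:\temp\pages.txt",
            pages.Select(p => p.DocumentName));
    });
}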

Broken WF4 workflow rehydration

Consider a WF4 project running in IIS, with a single workflow definition (xamlx) and a SqlInstanceStore for persistence.
Instead of hosting the xamlx directly, we host a WorkflowServiceHostFactory which spins up a dedicated WorkflowServiceHost on a separate endpoint for every customer.
This has been running fine for a while, until we required a new version of the workflow definition, so now on top of Flow.xamlx I have Flow1.xamlx.
Since all interactions with the workflow service are wrapped with business logic which is smart enough to identify the required version, this homebrew versioning works fine for newly started workflows (both on Flow.xamlx and Flow1.xamlx).
However, workflows started before this change fail to be reactivated (on a post the servicehost throws an UnknownMessageReceived exception).
Since WF isn't overly verbose in telling you WHY it can't reactivate the workflow (wrong version, instance not found, lock, etc), we attached a SQL profiler to the database.
It turns out the 'WorkflowServiceType' the WorkflowServiceHost uses in its queries is different from the stored instances' WorkflowServiceType. Likely this is why it fails to detect the persisted instance.
Since I'm pretty sure I instance the same xamlx, I can't understand where this value is coming from. What parameters go into the calculation of this Guid, does the environment matter (sitename), and what can I do to reactivate the workflow?
In the end I decompiled System.Activities.DurableInstancing. The only setter for WorkflowHostType on SqlWorkflowInstanceStore was in ExtractWorkflowHostType:
private void ExtractWorkflowHostType(IDictionary<XName, InstanceValue> commandMetadata)
{
    InstanceValue instanceValue;
    if (commandMetadata.TryGetValue(WorkflowNamespace.WorkflowHostType, out instanceValue))
    {
        XName xName = instanceValue.Value as XName;
        if (xName == null)
        {
            throw FxTrace.Exception.AsError(new InstancePersistenceCommandException(SR.InvalidMetadataValue(WorkflowNamespace.WorkflowHostType, typeof(XName).Name)));
        }
        byte[] bytes = Encoding.Unicode.GetBytes(xName.ToString());
        base.Store.WorkflowHostType = new Guid(HashHelper.ComputeHash(bytes));
        this.fireRunnableInstancesEvent = true;
    }
}
I couldn't clearly disentangle the calling code path, so I had to find out at runtime by attaching WinDbg/SOS to IIS and breaking on HashHelper.ComputeHash.
I was able to retrieve the XName that goes into the hash calculation, which has a local name equal to the service file, and a namespace equal to [sitename]/[path]/.
In the end the WorkflowHostType calculation comes down to:
var xName = XName.Get("Flow.xamlx.svc", "/examplesite/WorkflowService/1/");
var bytes = Encoding.Unicode.GetBytes(xName.ToString());
var WorkflowHostType = new Guid(HashHelper.ComputeHash(bytes));
Bottom line: apparently workflows can only be rehydrated when the service filename, sitename and path are all identical (case sensitive) to what they were when the workflow was started.

Executing untrusted code

I'm building a C# application which uses plug-ins. The application must guarantee to the user that plug-ins will not do whatever they want on the user machine, and will have less privileges that the application itself (for example, the application can access its own log files, whereas plug-ins cannot).
I considered three alternatives.
Using System.AddIn. I tried this alternative first, because it seemed much more powerful, but I'm really disappointed by the need to modify the same code seven times in seven different projects each time I want to change something. Besides, there is a huge number of problems to solve even for a simple Hello World application.
Using System.Activator.CreateInstance(assemblyName, typeName). This is what I used in the preceding version of the application. I can't use it any more, because it does not provide a way to restrict permissions.
Using System.Activator.CreateInstance(AppDomain domain, [...]). That's what I'm trying to implement now, but it seems that the only way to do that is to go through ObjectHandle, which requires serialization for every class used. However, the plug-ins contain WPF UserControls, which are not serializable.
So is there a way to create plug-ins containing UserControls or other non-serializable objects and to execute those plug-ins with a custom PermissionSet?
One thing you could do is set the current AppDomain's policy level to a restricted permission set and add evidence markers to restrict based on strong name or location. The easiest would probably be to require that plugins live in a specific directory and give them a restrictive policy.
e.g.
public static void SetRestrictedLevel(Uri path)
{
    PolicyLevel appDomainLevel = PolicyLevel.CreateAppDomainLevel();

    // Create simple root policy, normally with FullTrust
    PolicyStatement fullPolicy = new PolicyStatement(appDomainLevel.GetNamedPermissionSet("FullTrust"));
    UnionCodeGroup policyRoot = new UnionCodeGroup(new AllMembershipCondition(), fullPolicy);

    // Build restricted permission set
    PermissionSet permSet = new PermissionSet(PermissionState.None);
    permSet.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution));
    PolicyStatement permissions = new PolicyStatement(permSet, PolicyStatementAttribute.Exclusive);
    policyRoot.AddChild(new UnionCodeGroup(new UrlMembershipCondition(path.ToString()), permissions));

    appDomainLevel.RootCodeGroup = policyRoot;
    AppDomain.CurrentDomain.SetAppDomainPolicy(appDomainLevel);
}
static void RunPlugin()
{
    try
    {
        SetRestrictedLevel(new Uri("file:///c:/plugins/*"));
        Assembly a = Assembly.LoadFrom("file:///c:/plugins/ClassLibrary.dll");
        Type t = a.GetType("ClassLibrary.TestClass");

        /* Will throw an exception */
        t.InvokeMember("DoSomething", BindingFlags.InvokeMethod | BindingFlags.Public | BindingFlags.Static,
            null, null, null);
    }
    catch (Exception e)
    {
        Console.WriteLine(e.ToString());
    }
}
Of course this isn't rigorously tested and CAS policy is notoriously complex so there is always a risk that this code might allow some things to bypass the policy, YMMV :)
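As an alternative to AppDomain-level CAS policy, you could create a sandboxed AppDomain with an explicit grant set and load the plug-ins there (this is the approach the .NET 4 sandboxing guidance describes). A rough sketch, with the plugin folder, assembly name and type name as placeholder assumptions:

using System;
using System.Security;
using System.Security.Permissions;

static class Sandbox
{
    public static void RunPluginSandboxed()
    {
        // Placeholder plugin folder - adjust to your layout.
        string pluginDir = @"c:\plugins";

        // Minimal grant set: execution only, plus read access to the plugin folder.
        var permSet = new PermissionSet(PermissionState.None);
        permSet.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution));
        permSet.AddPermission(new FileIOPermission(FileIOPermissionAccess.Read | FileIOPermissionAccess.PathDiscovery, pluginDir));

        var setup = new AppDomainSetup { ApplicationBase = pluginDir };

        // Everything loaded into this domain runs with only the grant set above.
        AppDomain sandbox = AppDomain.CreateDomain("PluginSandbox", null, setup, permSet);
        try
        {
            // Works without serialization when the plugin type derives from MarshalByRefObject;
            // WPF UserControls themselves cannot cross the AppDomain boundary, so the plug-in
            // would have to expose MarshalByRefObject contracts instead.
            object plugin = sandbox.CreateInstanceAndUnwrap(
                "ClassLibrary",            // assumed assembly name
                "ClassLibrary.TestClass"); // assumed type name
            Console.WriteLine("Created: " + plugin);
        }
        finally
        {
            AppDomain.Unload(sandbox);
        }
    }
}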