Configure NHibernate for Oracle and SQL Server in the same application

I need to create a switchable web.config that can configure NHibernate to work with either MS SQL Server or Oracle.
I have two configuration files and one web.config:
sqlconfig.config
oracleconfig.config
If I want only the SQL configuration, web.config needs to point to sqlconfig.config; if I want only Oracle, it needs to point to oracleconfig.config.
I tried child config files and other web.config options.
I found that a similar problem comes up when the connection strings are kept in the web.config files.
Is there any option for doing this?
Thanks in advance :)

This issue can easily be solved with a switch in code (Oracle or SQL Server) at the place where we ask for an ISessionFactory.
We just have to pass the config file name and the connection string key:
public ISessionFactory BuildFactory(
    string configFileName,
    string connectionStringKey)
{
    // full path to the config file
    var fullFileName = System.IO.Path.Combine(
        AppDomain.CurrentDomain.BaseDirectory, configFileName);
    var document = System.Xml.Linq.XDocument.Load(fullFileName);

    // NHibernate configuration, fed with all the settings in our config file
    var config = new NHibernate.Cfg.Configuration();
    using (var reader = document.CreateReader())
    {
        config.Configure(reader);
    }

    // read the connection string from web.config
    // ADVANTAGE: it could even be encrypted there
    var connectionStringFromWebConfig = ConfigurationManager
        .ConnectionStrings[connectionStringKey].ConnectionString;

    // use that web.config connection string
    config.SetProperty("connection.connection_string", connectionStringFromWebConfig);

    // create the factory
    NHibernate.ISessionFactory factory = config.BuildSessionFactory();
    return factory;
}
And then we can call it like:
var factory = BuildFactory("config/sqlconfig.config", "sqlServerConnection");
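To drive the switch itself from configuration, one option (a sketch, assuming a hypothetical DatabaseProvider appSetting and an oracleConnection connection string registered in web.config) is:
// assumption: web.config contains <add key="DatabaseProvider" value="MSSQL" /> (or "Oracle")
// plus "sqlServerConnection" / "oracleConnection" entries under <connectionStrings>
var provider = ConfigurationManager.AppSettings["DatabaseProvider"];

var factory = string.Equals(provider, "Oracle", StringComparison.OrdinalIgnoreCase)
    ? BuildFactory("config/oracleconfig.config", "oracleConnection")
    : BuildFactory("config/sqlconfig.config", "sqlServerConnection");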

Related

Dependency Injection Access While Configuring Service Registrations in ASP.NET Core (3+)

I have cases where I want to configure services based on objects that are registered in the dependency injection container.
For example, I have the following registration for WS-Federation:
authenticationBuilder.AddWsFederation(options =>
{
    options.MetadataAddress = "...";
    options.Wtrealm = "...";
    options.[...] = ...
});
My goal in the above case is to use a configuration object, available via the DI container, to configure the WsFederation middleware.
It looks to me like IPostConfigureOptions<> is the way to go, but so far I have not found a way to accomplish this.
How can this be done, or is it not possible?
See https://andrewlock.net/simplifying-dependency-injection-for-iconfigureoptions-with-the-configureoptions-helper/ for the I(Post)ConfigureOptions<T> way, but I find that way too cumbersome.
I generally use this pattern:
// Get my custom config section
var fooSettingsSection = configuration.GetSection("Foo");

// Bind it to my custom section's settings class
var fooSettings = fooSettingsSection.Get<FooSettings>()
    ?? throw new ArgumentException("Foo not configured");

// Register the section for services that ask for an IOptions<FooSettings>
// (Configure<T> takes the IConfiguration section, not the bound instance)
services.Configure<FooSettings>(fooSettingsSection);

// Use the settings instance directly
services.AddSomeOtherService(options =>
{
    options.ServiceFoo = fooSettings.ServiceFoo;
});
A little more explicit, but you have all your configuration and DI code in one place.
Of course this bypasses I(Post)ConfigureOptions<T> entirely, so if other code uses those interfaces to modify FooSettings afterwards, my code won't notice it, since it reads directly from the configuration file. Given that I control FooSettings and its users, that's no problem for me.
This should be the approach if you do want to use that interface:
First, register your custom config section that you want to pull the settings from:
var fooSettingsSection = configuration.GetSection("Foo");
services.Configure<FooSettings>(fooSettingsSection);
Then, create an options configurer:
public class ConfigureWSFedFromFooSettingsOptions
    : IPostConfigureOptions<Microsoft.AspNetCore.Authentication.WsFederation.WsFederationOptions>
{
    private readonly FooSettings _fooSettings;

    public ConfigureWSFedFromFooSettingsOptions(IOptions<FooSettings> fooSettings)
    {
        _fooSettings = fooSettings.Value;
    }

    // IPostConfigureOptions<T> requires PostConfigure(name, options)
    public void PostConfigure(string name, WsFederationOptions options)
    {
        options.MetadataAddress = _fooSettings.WsFedMetadataAddress;
        options.Wtrealm = _fooSettings.WsFedWtRealm;
    }
}
And finally link the stuff together:
services.AddTransient<IPostConfigureOptions<WsFederationOptions>, ConfigureWSFedFromFooSettingsOptions>();
The configurer will get your IOptions<FooSettings> injected (populated from the appsettings) and will then be used to further configure the WsFederationOptions.
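For reference, the FooSettings type itself is never shown above; a minimal sketch of what it is assumed to look like (the property names and the "Foo" section layout are illustrative only):
// hypothetical settings class, bound from a "Foo" section such as:
// "Foo": { "ServiceFoo": "...", "WsFedMetadataAddress": "...", "WsFedWtRealm": "..." }
public class FooSettings
{
    public string ServiceFoo { get; set; }
    public string WsFedMetadataAddress { get; set; }
    public string WsFedWtRealm { get; set; }
}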

Improving scale out performance for multiple web instances using SignalR Redis Backplane

I have SignalR integrated in our application, and it has been working just fine.
A couple of days ago, due to some requirements, we had to support scaling out our application, and hence we opted for SignalR scale-out using Redis.
However, since the integration, SignalR itself has stopped working, and the error we get is: "No transport could be initialized successfully. Try specifying a different transport or none at all for auto initialization."
Approaches tried:
- Different versions of SignalR, as suggested online - did not help
- Increased connection timeout - did not help
I need some help resolving this. Suggestions for any other approach are also welcome.
[Update1] Adding code snippets
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Any connection or hub wire-up and configuration should go here
        GlobalHost.DependencyResolver.UseRedis("server", port, "password", "AppName");
        app.MapSignalR();
    }
}
For more reference, I followed this link: https://learn.microsoft.com/en-us/aspnet/signalr/overview/performance/scaleout-with-redis
[Update2]
public void Configuration(IAppBuilder app)
{
    GlobalHost.Configuration.ConnectionTimeout = TimeSpan.FromSeconds(110);
    GlobalHost.Configuration.DisconnectTimeout = TimeSpan.FromSeconds(30);
    GlobalHost.Configuration.KeepAlive = TimeSpan.FromSeconds(10);
    GlobalHost.Configuration.TransportConnectTimeout = TimeSpan.FromSeconds(45);

    ConfigureAuth(app);
    ConfigureSignalR(app);

    // SignalR backplane configuration
    string server = RoleEnvironment.IsAvailable
        ? RoleEnvironment.GetConfigurationSettingValue(Constant.ConfigKeys.RedisCacheEndpoint)
        : ConfigurationManager.AppSettings[Constant.ConfigKeys.RedisCacheEndpoint];
    string port = RoleEnvironment.IsAvailable
        ? RoleEnvironment.GetConfigurationSettingValue(Constant.ConfigKeys.RedisCachePort)
        : ConfigurationManager.AppSettings[Constant.ConfigKeys.RedisCachePort];
    string password = RoleEnvironment.IsAvailable
        ? RoleEnvironment.GetConfigurationSettingValue(Constant.ConfigKeys.RedisCachePassword)
        : ConfigurationManager.AppSettings[Constant.ConfigKeys.RedisCachePassword];

    const string SIGNALR_REDIS_APPNAME = "Phoenix 2.0 Admin Tool";
    // StackExchange.Redis expects comma-separated options: host:port,password=...,ssl=True,abortConnect=False
    string connectionString = server + ":" + Int32.Parse(port) + ",password=" + password + ",ssl=True,abortConnect=False";
    RedisScaleoutConfiguration cfg = new RedisScaleoutConfiguration(connectionString, SIGNALR_REDIS_APPNAME);
    GlobalHost.DependencyResolver.UseRedis(cfg);

    app.MapSignalR();
}
We have an Azure App Service and are able to use SignalR with the Redis backplane. I did observe that things did not work properly depending on the connection string content. We used the RedisScaleoutConfiguration overload of the GlobalHost.DependencyResolver.UseRedis API instead of the overload that you show.
Here is a block of code based on our working startup (values changed to protect the vulnerable):
const string SIGNALR_REDIS_APPNAME = "OurAppName";
string connectionString = "thename.redis.cache.windows.net:6380;password=somelongsecret,ssl=True,abortConnect=False";
RedisScaleoutConfiguration cfg = new RedisScaleoutConfiguration(connectionString, SIGNALR_REDIS_APPNAME);
GlobalHost.DependencyResolver.UseRedis(cfg);
Obviously you can get an actual connection string from web.config with more code. We also had trouble when specifying a non-default DB name, so we are using the default here.
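A minimal sketch of pulling that connection string out of web.config instead of hard-coding it (the appSettings key name is an assumption):
// assumption: <add key="SignalRRedisConnectionString"
//                  value="thename.redis.cache.windows.net:6380,password=...,ssl=True,abortConnect=False" />
string connectionString = System.Configuration.ConfigurationManager
    .AppSettings["SignalRRedisConnectionString"];
GlobalHost.DependencyResolver.UseRedis(
    new RedisScaleoutConfiguration(connectionString, SIGNALR_REDIS_APPNAME));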
Hope this helps.

How to verify that NHibernate hbm.xml files match a particular SQL schema?

I have an ASP.NET MVC web application using NHibernate as the ORM on SQL Server 2008 R2. Once deployed to the server, the database can be updated at any time (some changes are ad hoc).
The problem is that when the database schema changes, the application crashes because the NHibernate .hbm.xml files no longer match the DB schema.
How do I verify that my current *.hbm.xml files match the database schema, and how can I detect a mismatch early in ASP.NET MVC?
You can do the check when the application starts, for example in Global.asax:
protected void Application_Start()
{
}
The connection string is the key to getting the expected schema:
<property name="connection.connection_string">Server=.;Initial Catalog=TheExpectedSchema; ..</property>
First, read the expected schema from the NHibernate config by retrieving it from the Initial Catalog part (if the database is Oracle, you would probably use the User ID part instead).
NHibernate.Cfg.Configuration config = ...;
var conStr = config.Properties["connection.connection_string"];
var match = Regex.Match(conStr, "Initial Catalog *= *([^;]*) *");
var expectedSchema = match.Groups[1].Value;
Then read the actual schema from the *.hbm.xml files:
<hibernate-mapping schema="TheActualSchema"
If the files are placed under the App_Data directory, read each file and use an XmlDocument to get the schema:
// HostingEnvironment.MapPath also works during Application_Start
var appDataDir = new DirectoryInfo(System.Web.Hosting.HostingEnvironment.MapPath("~/App_Data"));
var files = appDataDir.GetFiles("*.hbm.xml");
foreach (var file in files)
{
    var doc = new XmlDocument();
    doc.Load(file.FullName);
    var actualSchema = doc.DocumentElement.GetAttribute("schema");
    if (actualSchema != expectedSchema)
    {
        // Proper handling here (an example would be throwing an exception).
        throw new Exception(string.Format(
            "Expected schema: {0}, actual schema: {1}", expectedSchema, actualSchema));
    }
}
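Putting the pieces together in Global.asax could look roughly like this (a sketch; VerifyMappingSchemas is a hypothetical helper wrapping the checks above, and the configuration is assumed to be built the way the application already builds it):
protected void Application_Start()
{
    // assumption: NHibernate is configured from the usual hibernate.cfg.xml / web.config section
    var config = new NHibernate.Cfg.Configuration().Configure();

    // hypothetical helper that runs the expected-vs-actual schema comparison shown above
    // and throws if any *.hbm.xml file does not match
    VerifyMappingSchemas(config);

    // ...the usual MVC start-up (areas, filters, routes) continues here
}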

RavenDB IsOperationAllowedOnDocument not supported in Embedded Mode

RavenDB throws an InvalidOperationException when IsOperationAllowedOnDocument is called in embedded mode.
I can see in the IsOperationAllowedOnDocument implementation a clause checking for calls from embedded mode:
namespace Raven.Client.Authorization
{
    public static class AuthorizationClientExtensions
    {
        public static OperationAllowedResult[] IsOperationAllowedOnDocument(
            this ISyncAdvancedSessionOperation session,
            string userId, string operation, params string[] documentIds)
        {
            var serverClient = session.DatabaseCommands as ServerClient;
            if (serverClient == null)
                throw new InvalidOperationException(
                    "Cannot get whatever operation is allowed on document in embedded mode.");
            // ... (rest of the method elided)
Is there a workaround for this other than not using embedded mode?
Thanks for your time.
I encountered the same situation while writing some unit tests. The solution James provided worked; however, it resulted in one code path for the unit test and another for the production code, which defeated the purpose of the unit test. We were able to create a second document store and connect it to the first one, which then let us access the authorization extension methods successfully. While this solution is probably not suitable for production code (creating document stores is expensive), it works nicely for unit tests. Here is a code sample:
using (var documentStore = new EmbeddableDocumentStore
{
    RunInMemory = true,
    UseEmbeddedHttpServer = true,
    Configuration = { Port = EmbeddedModePort }
})
{
    documentStore.Initialize();
    var url = documentStore.Configuration.ServerUrl;

    using (var docStoreHttp = new DocumentStore { Url = url })
    {
        docStoreHttp.Initialize();
        using (var session = docStoreHttp.OpenSession())
        {
            // now you can run code like:
            // session.GetAuthorizationFor(),
            // session.SetAuthorizationFor(),
            // session.Advanced.IsOperationAllowedOnDocument(),
            // etc...
        }
    }
}
There are a couple of other items worth mentioning:
- The first document store needs to run with UseEmbeddedHttpServer set to true so that the second one can reach it.
- I created a constant for the port so it is used consistently and a non-reserved port is guaranteed.
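For reference, EmbeddedModePort is not defined in the snippet above; it is just a constant along these lines (the value is an arbitrary assumption, any free non-reserved port works):
// hypothetical constant for the embedded HTTP server port
private const int EmbeddedModePort = 8079;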
I encountered this as well. Looking at the source, there's no way to do that operation as written. I'm not sure if there's some intrinsic reason why, since I could easily replicate the functionality in my app by making an HTTP request directly for the same information:
HttpClient http = new HttpClient();
http.BaseAddress = new Uri("http://localhost:8080");

var url = new StringBuilder("/authorization/IsAllowed/")
    .Append(Uri.EscapeUriString(userid))
    .Append("?operation=")
    .Append(Uri.EscapeUriString(operation))
    .Append("&id=")
    .Append(Uri.EscapeUriString(entityid));

http.GetStringAsync(url.ToString()).ContinueWith((response) =>
{
    var results = _session.Advanced.DocumentStore.Conventions.CreateSerializer()
        .Deserialize<OperationAllowedResult[]>(
            new RavenJTokenReader(RavenJToken.Parse(response.Result)));
}).Wait();

Changing connection string at runtime for OData/WCF Data Service which uses basic authentication

I have an OData service with a single schema. It points to a development database and is served through a WCF Data Service, which clients running Excel/PowerPivot then use to fetch their own data for reports and the like.
The service is secured at runtime through pretty much the same basic authentication explained here: http://msdn.microsoft.com/en-us/data/gg192997
In the live environment the service needs to sit on the server and connect to different databases based on the username/password supplied. Users will type in 'username#clientID' and 'password'. The 'username#clientID' value is then split(), and the username/password is checked against the SQL database, but the database server URL to check against is determined by the clientID.
Also, once the request is authorized, the WCF Data Service needs to return data from the database corresponding to that clientID.
The approach I tried was to modify the connection string in the web.config file, but this doesn't work because the file is read-only, and I'm not even sure it would have worked at all. What I need is to get the EDMX/WCF Data Service to return the data from the correct database. Here's what I tried:
private static bool TryAuthenticate(string user, string password, out IPrincipal principal)
{
    Configuration myWebConfig = System.Web.Configuration.WebConfigurationManager.OpenWebConfiguration("~");
    myWebConfig.AppSettings.Settings["test"].Value = "Hello";
    myWebConfig.Save();

    string newConnStr = myWebConfig.ConnectionStrings.ConnectionStrings["IntelCorpEntities"].ToString();
    // string.Replace returns a new string, so the result has to be assigned back
    newConnStr = newConnStr.Replace("SERGEIX01", "SERVERX01");
    myWebConfig.ConnectionStrings.ConnectionStrings["IntelCorpEntities"].ConnectionString = newConnStr;
    myWebConfig.Save();

    if (user.ToLower().Equals("admin") && password.Equals("password"))
    {
        principal = new GenericPrincipal(new GenericIdentity(user), new string[] { "Users" });
        return true;
    }
    else
    {
        principal = null;
        return false;
    }
}
In your DataService-derived class, override the CreateDataSource method; in it, figure out the right connection string, create a new instance of the EF object context for that connection string, and return it.
The WCF Data Service will then not use the default constructor on the EF object context; it is completely up to you to construct the instance with the right connection string.
In your .svc.cs file, add the following:
protected override NorthWindEntity CreateDataSource()
{
    var connection = new System.Data.EntityClient.EntityConnection();
    // build the connection string for the database that matches the authenticated clientID here
    connection.ConnectionString = "";
    NorthWindEntity ctx = new NorthWindEntity(connection);
    return ctx;
}
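To fill in that connection string per client, one way (a sketch; LookupDatabaseForClient, clientId, and the metadata resource names are assumptions, not part of the original answer) is to compose it with SqlConnectionStringBuilder and EntityConnectionStringBuilder:
// hypothetical helper that maps the authenticated clientID to its database server and catalog
var clientDb = LookupDatabaseForClient(clientId);

var sqlBuilder = new System.Data.SqlClient.SqlConnectionStringBuilder
{
    DataSource = clientDb.Server,
    InitialCatalog = clientDb.Catalog,
    IntegratedSecurity = true
};

var entityBuilder = new System.Data.EntityClient.EntityConnectionStringBuilder
{
    Provider = "System.Data.SqlClient",
    ProviderConnectionString = sqlBuilder.ToString(),
    // assumption: the metadata resource names match the EDMX in the project
    Metadata = "res://*/NorthWindEntity.csdl|res://*/NorthWindEntity.ssdl|res://*/NorthWindEntity.msl"
};

connection.ConnectionString = entityBuilder.ToString();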