I have a base abstract context which has a couple hundred shared objects, and then 2 "implementation" contexts which both inherit from the base and are designed to be used by different tenants in a .net core application. A tenant object is injected into the constructor for OnConfiguring to pick up which connection string to use.
public abstract class BaseContext : DbContext
{
    protected readonly AppTenant Tenant;

    protected BaseContext(AppTenant tenant)
    {
        Tenant = tenant;
    }
}
public class TenantOneContext : BaseContext
{
    public TenantOneContext(AppTenant tenant)
        : base(tenant)
    {
    }
}
In startup.cs, I register the DbContexts like this:
services.AddDbContext<TenantOneContext>();
services.AddDbContext<TenantTwoContext>();
Then, using the Autofac container and the Multitenant package, I register tenant-specific contexts like this:
IContainer container = builder.Build();
MultitenantContainer mtc = new MultitenantContainer(container.Resolve<ITenantIdentificationStrategy>(), container);
mtc.ConfigureTenant("1", config =>
{
config.RegisterType<TenantOneContext>().AsSelf().As<BaseContext>();
});
mtc.ConfigureTenant("2", config =>
{
config.RegisterType<TenantTwoContext>().AsSelf().As<BaseContext>();
});
Startup.ApplicationContainer = mtc;
return new AutofacServiceProvider(mtc);
My service layers are designed around the BaseContext being injected for reuse where possible, and then services which require specific functionality use the TenantContexts.
public class BusinessService
{
    private readonly BaseContext _baseContext;

    public BusinessService(BaseContext context)
    {
        _baseContext = context;
    }
}
In the above service at runtime, I get an exception: "No constructors on type 'BaseContext' can be found with the constructor finder 'Autofac.Core.Activators.Reflection.DefaultConstructorFinder'". I'm not sure why this is broken... the AppTenant is definitely created, as I can inject it successfully in other places. I can make it work if I add an extra registration:
builder.RegisterType<TenantOneContext>().AsSelf().As<BaseContext>();
I don't understand why the above registration is required for the tenant container registrations to work. This seems broken to me; in StructureMap (SaasKit) I was able to do this without adding an extra registration, and I assumed the built-in AddDbContext registrations would take care of creating a default registration for the tenant containers to overwrite. Am I missing something here, or is this possibly a bug in the multitenant functionality of Autofac?
UPDATE:
Here is a fully runnable repo of the question: https://github.com/danjohnso/testapp
Why is line 66 of Startup.cs needed if I have lines 53/54 and lines 82-90?
As I expected, your problem has nothing to do with multitenancy as such. You've implemented it almost entirely correctly, and you're right: you do not need that additional registration. By the way, you don't need these two either, because you register the contexts in the tenant scopes a bit later:
services.AddDbContext<TenantOneContext>();
services.AddDbContext<TenantTwoContext>();
So, you've made only one very small but very important mistake in your TenantIdentificationStrategy implementation. Let's walk through how you create the container - this is mainly for other people who may run into this problem as well. I'll mention only the relevant parts.
First, the TenantIdentificationStrategy gets registered in the container along with everything else. Since there is no explicit lifetime scope specified, it is registered as InstancePerDependency() by default - but that does not really matter, as you'll see. Next, the "standard" IContainer gets created by Autofac's builder.Build(). The next step in the process is to create the MultitenantContainer, which takes an instance of ITenantIdentificationStrategy. This means that the MultitenantContainer and its captive dependency - the ITenantIdentificationStrategy - will be singletons regardless of how ITenantIdentificationStrategy is registered in the container. In your case it gets resolved from that standard "root" container so that its dependencies are managed for you - well, that's what Autofac is for anyway.

Everything is fine with this approach in general, but this is where your problem actually begins. When Autofac resolves this instance, it does exactly what it is expected to do - it injects all the dependencies into TenantIdentificationStrategy's constructor, including IHttpContextAccessor. So, right there in the constructor, you grab the HttpContext from that context accessor and store it for use in the tenant resolution process - and this is the fatal mistake: there is no HTTP request at this time, and since TenantIdentificationStrategy is a singleton, there never will be one for it! It holds a null request context for the whole application lifespan. This effectively means that TenantIdentificationStrategy cannot resolve a tenant identifier from HTTP requests - because it never actually sees them. Consequently, the MultitenantContainer cannot resolve any tenant-specific services.
Now that the problem is clear, its solution is obvious and trivial - just move fetching the request context (context = _httpContextAccessor.HttpContext) into the TryIdentifyTenant() method. That method gets called in the proper context and will be able to access the current request and analyze it.
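For illustration, a corrected strategy might look roughly like this (a sketch; the class name and the host-name-based tenant lookup are just placeholders for whatever the repo actually does):

public class HttpTenantIdentificationStrategy : ITenantIdentificationStrategy
{
    private readonly IHttpContextAccessor _httpContextAccessor;

    public HttpTenantIdentificationStrategy(IHttpContextAccessor httpContextAccessor)
    {
        // Store only the accessor; HttpContext is still null at container-build time.
        _httpContextAccessor = httpContextAccessor;
    }

    public bool TryIdentifyTenant(out object tenantId)
    {
        tenantId = null;

        // Fetch the request context per call, when an actual request exists.
        var context = _httpContextAccessor.HttpContext;
        if (context == null)
        {
            return false;
        }

        // Placeholder resolution logic: derive the tenant id from the host name.
        tenantId = context.Request.Host.Host;
        return tenantId != null;
    }
}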
PS. This digging has been highly educational for me since I had absolutely no idea about autofac's multi-tenant concept, so thank you very much for such an interesting question! :)
PPS. And one more thing: this question is a perfect example of how important a well-prepared example is. You provided a very good one. Without it, no one would have been able to figure out what the problem was, since the most important part was not presented in the question - and sometimes you just don't know where that part actually is...
Related
Objective
Create an ASP.NET Core-based solution that permits plugins to be loaded at runtime, well after IServiceCollection/IServiceProvider have been locked down to change.
Issue
IServiceCollection is configured at startup, the IServiceProvider is built from it, and then both are locked against change before the app starts running.
I'm sure there are great reasons for doing it that way... but I rue the day they made it the only way to do things... so:
Attempt #1
Was based on using Autofac's ability to make child containers, falling back to the parent container for whatever is not specific to the child container,
where, right after uploading the new plugin, I create a new ILifetimeScope so that I can add services via its containerBuilder:
moduleLifetimeScope = _lifetimeScope.BeginLifetimeScope(autoFacContainerBuilder =>
{
    // can add services now
    autoFacContainerBuilder.AddSingleton(serviceType, tInterface);
});
save the scope and its container in a dictionary, keyed by the controllerTypes found in the DLL, so that:
later I can use a custom implementation of IControllerActivator to first try the default IServiceProvider before falling back to the plugin's child container.
The upside was: holy cow, with a bit of hacking around, I slowly got Controllers to work, then DI into Controllers, then OData....
The downside was that it's custom to a specific DI library, and the Startup extensions (AddDbContext, AddOData) were not available since autoFacContainerBuilder doesn't implement IServiceCollection, so it became a huge foray into the innards... one that sooner or later couldn't keep being pushed uphill (e.g. I couldn't figure out how to port AddDbContext).
Attempt #2
At startup, save a singleton copy of the original IServiceCollection in the IServiceCollection itself (to easily re-get it later)
Later, upon loading a new plugin,
Clone the original IServiceCollection
Add to the cloned ServiceCollection the new plugin services/controllers found by reflection (roughly as sketched after this list)
Use standard extension methods to AddDbContext and AddOData, etc.
Use a custom implementation of IControllerActivator as per above, falling back to the child IServiceProvider
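Roughly, the cloning step looks something like this (just a sketch; _originalServices, the plugin types, and the connection string are all hypothetical names):

// Copy every registration captured at startup into a fresh collection.
IServiceCollection clone = new ServiceCollection();
foreach (ServiceDescriptor descriptor in _originalServices)
{
    clone.Add(descriptor);
}

// Standard extension methods work again, because the clone really is an IServiceCollection.
clone.AddDbContext<PluginDbContext>(options => options.UseSqlServer(pluginConnectionString));
clone.AddScoped(pluginServiceInterface, pluginServiceImplementation);

IServiceProvider pluginProvider = clone.BuildServiceProvider();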
Holy cow. Controllers work, OData works, DbContext works...
Hmm... it's not working perfectly. Whereas the Controllers are being created anew on every request, it's the same DbContext every time, because it's never disposed, because it's not scoped by some form of scope factory.
Attempt #3
Same thing as #2, but instead of making the IServiceProvider when the module is loaded, now -- in the custom IControllerActivator -- I make a new IServiceProvider on each request.
No idea how much memory/time this is wasting, but I'm guessing it's... not brilliant.
But sure... I've really just pushed the problem a bit further along, not gotten rid of it:
A new IServiceProvider is being created...but nothing is actually disposing of it either.
backed by the fact that I'm watching memory usage increase slowly but surely....
Attempt #4
Same as above, but instead of creating a new IServiceProvider on every request, I keep the IServiceProvider that I first built when I uploaded the module, but
use it to build a new scope, and get its nested IServiceProvider,
holding on to the scope for later disposal.
It's a hack as follows:
public class AppServiceBasedControllerActivator : IControllerActivator
{
    public object Create(ControllerContext actionContext)
    {
        // ...
        // find the cached (ControllerType -> module ServiceProvider) entry,
        // which gives controllerType and scopeDictionaryEntry
        // ...
        var scope = scopeDictionaryEntry.ServiceProvider.CreateScope();
        var httpController = scope.ServiceProvider.GetService(controllerType);
        actionContext.HttpContext.Items["SAVEMEFROMME"] = scope;
        return httpController;
    }

    public virtual void Release(ControllerContext context, object controller)
    {
        var scope = context.HttpContext.Items["SAVEMEFROMME"] as IServiceScope;
        if (scope == null) { return; }
        context.HttpContext.Items.Remove("SAVEMEFROMME");
        scope.Dispose(); // Memory should go back down... but doesn't.
    }
}
Attempt #5
No idea. Hence this Question.
I feel like I'm a little further along...but just not closing the chasm to success.
What would you suggest to permit this, in a memory safe way?
Background Musings/Questions in case it helps?
As I understand it, the default IServiceProvider doesn't have a notion of child lifespan/containers, like Autofac can create.
I see that IServiceScopeFactory creates a new scope, which exposes a new IServiceProvider.
I understand there is some middleware (what name?) that invokes IServiceScopeFactory to make an IServiceProvider on every single request (correct?)
Are these per-request IServiceProviders really separate/duplicates, and don't 'descend' from a parent one, falling back to the parent if asked for a singleton?
What is the middleware doing differently to dispose/reduce memory at the end of the call?
Should I be thinking about replacing the middleware? But even if I could -- it's so early that I would only have a URL, not yet a Controller type, therefore I don't know which plugin assembly the Controller came from, therefore I don't know which IServiceProvider to use for it... therefore too early to be of use?
Thank you
Getting a real grip on adding plugin sourced scoped services/controllers/DbContexts would be...wow. Been looking for this capability for several months now.
Thanks.
Other Posts
some similarity to:
Use custom IServiceProvider implementation in asp.net core
but I don't see how their disposing is any different from what I'm doing, so are they having memory issues too?
I am in the process of migrating NServiceBus up to v6 and am at a roadblock in the process of removing references to IBus.
We build upon a common library for many of our applications (Website, Micro Services etc) and this library has the concept of IEventPublisher which is essentially a Send and Publish interface. This library has no knowledge of NSB.
We can then supply the implementation of this IEventPublisher using DI from the application, this allows the library's message passing to be replaced with another technology very easily.
So what we end up with is an implementation similar to
public class NsbEventPublisher : IEventPublisher
{
    IEndpointInstance _instance;

    public NsbEventPublisher(IEndpointInstance endpoint)
    {
        _instance = endpoint;
    }

    public void Send(object message)
    {
        _instance.Send(message, new SendOptions());
    }

    public void Publish(object message)
    {
        _instance.Publish(message, new PublishOptions());
    }
}
This is a simplification of what actually happens but illustrates my problem.
Now when the DI container is asked for an IEventPublisher it knows to return a NsbEventPublisher and it knows to resolve the IEndpointInstance as we bind this in the bootstrapper for the website to the container as a singleton.
All is fine and my site runs perfect.
I am now migrating the micro-services (running in NSB.Host) and the DI container is refusing to resolve IEndpointInstance when resolving the dependencies within a message handler. Reading the docs this is intentional and I should be using IMessageHandlerContext when in a message handler.
https://docs.particular.net/nservicebus/upgrades/5to6/moving-away-from-ibus
The docs even allude to the issue I have, in the bottom example around the class MyContextAccessingDependency. The suggestion is to pass the message context through the method, which puts a hard dependency on the code running in the context of a message handler.
What I would like to do is have access to a sender/publisher and the DI container can give me the correct implementation. The code does not need any concept of the caller and if it was called from a message handler or from a self hosted application that just wants to publish.
I see that there are two interfaces for communicating with the "Bus": IPipelineContext and IMessageSession, which IMessageHandlerContext and IEndpointInstance extend respectively.
What I am wondering is whether there is some unification of the two interfaces that gets bound by NSB into the container, so I can accept a single interface that sends/publishes messages. In a handler it would be an IMessageHandlerContext, and in my self-hosted application the IEndpointInstance.
For now I am looking to change my implementation of IEventPublisher depending on application hosting. I was just hoping there might be some discussion about how this approach is modeled without a reliable interface to send/publish irrespective of what initiated the execution of the code path.
A few things to note before I get to the code:
The abstraction-over-abstraction promise never works. I have never seen the argument of "I'm going to abstract the ESB/messaging/database/ORM so that I can swap it in the future" work. Ever.
When you abstract message-sending functionality like that, you lose some of the features the library provides. In this case, you can't perform 'Conversations' or use 'Sagas', which would hinder your overall experience; e.g. when using monitoring tools and watching diagrams in ServiceInsight, you won't see the whole picture but only nuggets of messages passing through the system.
Now in order to make that work, you need to register IEndpointInstance in your container when your endpoint starts up. Then that interface can be used in your dependency injection e.g. in NsbEventPublisher to send the messages.
Something like this (depending which IoC container you're using, here I assume Autofac):
static async Task AsyncMain()
{
    IEndpointInstance endpoint = null;

    var builder = new ContainerBuilder();
    builder.Register(x => endpoint)
           .As<IEndpointInstance>()
           .SingleInstance();

    // Endpoint configuration goes here...
    var busConfiguration = new EndpointConfiguration("EndpointName");

    endpoint = await Endpoint.Start(busConfiguration)
                             .ConfigureAwait(false);
}
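With the endpoint registered like that, the NsbEventPublisher from the question can be wired up against the same container, for example:

builder.RegisterType<NsbEventPublisher>()
       .As<IEventPublisher>()
       .SingleInstance();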
The issues with using IEndpointInstance / IMessageSession are mentioned here.
If I have the following Repository:
public IQueryable<User> Users()
{
    var db = new SqlDataContext();
    return db.Users;
}
I understand that the connection is opened only when the query is fired:
public class ServiceLayer
{
    public IRepository repo;

    public ServiceLayer(IRepository injectedRepo)
    {
        this.repo = injectedRepo;
    }

    public List<User> GetUsers()
    {
        return repo.Users().ToList(); // connection opened, query fired, connection closed. (or is it??)
    }
}
If this is the case, do I still need to make my Repository implement IDisposable?
The Visual Studio Code Metrics certainly think I should.
I'm using IQueryable because I give control of the queries to my service layer (filters, paging, etc.), so please no architectural discussions over the fact that I'm using it.
BTW - SqlDataContext is my custom class which extends Entity Framework's ObjectContext class (so I can have POCO parties).
So the question - do I really HAVE to implement IDisposable?
If so, I have no idea how this is possible, as each method shares the same repository instance.
EDIT
I'm using Dependency Injection (StructureMap) to inject the concrete repository into the service layer. This pattern is followed down the app stack - I'm using ASP.NET MVC and the concrete service is injected into the Controllers.
In other words:
User requests URL
Controller instance is created, which receives a new ServiceLayer instance, which is created with a new Repository instance.
Controller calls methods on service (all calls use same Repository instance)
Once request is served, controller is gone.
I am using Hybrid mode to inject dependencies into my controllers, which, according to the StructureMap documentation, causes the instances to be stored in HttpContext.Current.Items.
So, I can't do this:
using (var repo = new Repository())
{
    return repo.Users().ToList();
}
As this defeats the whole point of DI.
A common approach used with NHibernate is to create your session (ObjectContext) in begin_request (or some other similar lifecycle event) and then dispose of it in end_request. You can put that code in an HttpModule.
You would need to change your Repository so that it has the ObjectContext injected. Your Repository should get out of the business of managing the ObjectContext lifecycle.
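A minimal sketch of that HttpModule idea, assuming the context is stashed in HttpContext.Items (the class and key names are made up):

public class DataContextModule : IHttpModule
{
    private const string ContextKey = "__SqlDataContext";

    public void Init(HttpApplication application)
    {
        // Create the context when the request starts...
        application.BeginRequest += (sender, e) =>
            HttpContext.Current.Items[ContextKey] = new SqlDataContext();

        // ...and dispose of it when the request ends, so the repository never has to.
        application.EndRequest += (sender, e) =>
        {
            var context = HttpContext.Current.Items[ContextKey] as SqlDataContext;
            if (context != null)
            {
                context.Dispose();
            }
        };
    }

    public void Dispose() { }
}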
I would say you definitely should. Unless Entity Framework handles connections very differently from LINQ to SQL (which is what I've been using), you should implement IDisposable whenever you are working with connections. It might be true that the connection automatically closes after your transaction completes successfully. But what happens if it doesn't complete successfully? Implementing IDisposable is a good safeguard for making sure you don't have any connections left open after you're done with them. A simpler reason is that it's a best practice to implement IDisposable.
Implementation could be as simple as putting this in your repository class:
public void Dispose()
{
    SqlDataContext.Dispose();
}
Then, whenever you do anything with your repository (e.g., with your service layer), you just need to wrap everything in a using clause. You could do several "CRUD" operations within a single using clause, too, so you only dispose when you're all done.
Update
In my service layer (which I designed to work with LINQ to SQL, but hopefully this applies to your situation too), I do new up a new repository each time. To allow for testability, I have the dependency injector pass in a repository provider (instead of a repository instance). Each time I need a new repository, I wrap the call in a using statement, like this:
using (var repository = GetNewRepository())
{
    // ...
}

public Repository<TDataContext, TEntity> GetNewRepository()
{
    return _repositoryProvider.GetNew<TDataContext, TEntity>();
}
If you do it this way, you can mock everything (so you can test your service layer in isolation), yet still make sure you are disposing of your connections properly.
If you really need to do multiple operations with a single repository, you can put something like this in your base service class:
public void ExecuteAndSave(Action<Repository<TDataContext, TEntity>> action)
{
    using (var repository = GetNewRepository())
    {
        action(repository);
        repository.Save();
    }
}
action can be a series of CRUD actions or a complex query, but you know that if you call ExecuteAndSave(), when it's all done, your repository will be disposed properly.
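For example, a service method could then do something like this (GetById, Update and the entity property are hypothetical members, just to show the shape of the call):

ExecuteAndSave(repository =>
{
    var user = repository.GetById(userId);  // hypothetical repository member
    user.IsActive = false;                  // hypothetical entity property
    repository.Update(user);                // hypothetical repository member
});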
EDIT - Advice Received From Ayende Rahien
Got an email reply from Ayende Rahien (of Rhino Mocks, Raven, Hibernating Rhinos fame).
This is what he said:
Your problem is that you initialize your context like this:
_genericSqlServerContext = new GenericSqlServerContext(new EntityConnection("name=EFProfDemoEntities"));
That means that the context doesn't own the entity connection, which means that it doesn't dispose of it. In general, it is vastly preferable to have the context create the connection. You can do that by using:
_genericSqlServerContext = new GenericSqlServerContext("name=EFProfDemoEntities");
Which definitely makes sense - however, I would have thought that disposing of a SqlServerContext would also dispose of the underlying connection; guess I was wrong.
Anyway, that is the solution - now everything is getting disposed of properly.
So I no longer need the using on the repository:
public ICollection<T> FindAll<T>(Expression<Func<T, bool>> predicate, int maxRows) where T : Foo
{
    // don't need this anymore
    // using (var cr = ObjectFactory.GetInstance<IContentRepository>())
    return _fooRepository.Find().OfType<T>().Where(predicate).Take(maxRows).ToList();
}
And in my base repository, I implement IDisposable and simply do this:
Context.Dispose(); // Context is an instance of my custom sql context.
Hope that helps others out.
Does anyone have any tips or best practices regarding how Autofac can help manage the NHibernate ISession Instance (in the case of an ASP.NET MVC application)?
I'm not overly familiar with how NHibernate sessions should be handled. That said, Autofac has excellent instance lifetime handling (scoping and deterministic disposal). Some related resources are this article and this question. Since you're in ASP.NET MVC land, make sure you also look into the MVC integration stuff.
To illustrate the point, here's a quick sample on how you can use Autofac factory delegates and the Owned generic to get full control over instance lifetime:
public class SomeController
{
    private readonly Func<Owned<ISession>> _sessionFactory;

    public SomeController(Func<Owned<ISession>> sessionFactory)
    {
        _sessionFactory = sessionFactory;
    }

    public void DoSomeWork()
    {
        using (var session = _sessionFactory())
        {
            var transaction = session.Value.BeginTransaction();
            // ...
        }
    }
}
The container setup to get this to work is quite simple. Notice that we don't have to do anything to get the Func<> and Owned<> types, these are made available automatically by Autofac:
// cfg is your NHibernate Configuration instance
builder.Register(c => cfg.BuildSessionFactory())
       .As<ISessionFactory>()
       .SingleInstance();
builder.Register(c => c.Resolve<ISessionFactory>().OpenSession());
Update: my reasoning here is that, according to this NHibernate tutorial, the lifetime of the session instance should be that of the "unit of work". Thus we need some way of controlling both when the session instance is created and when the session is disposed.
With Autofac we get this control by requesting a Func<> instead of the type directly. Not using Func<> would require that the session instance be created upfront before the controller instance is created.
Next, the default in Autofac is that instances have the lifetime of their container. Since we know that we need the power to dispose this instance as soon as the unit of work is done, we request an Owned instance. Disposing the owned instance will in this case immediately dispose the underlying session.
Edit: Sounds like Autofac and probably other containers can scope the lifetime correctly. If that's the case, go for it.
It isn't a good idea to use your IoC container to manage sessions directly. The lifetime of your session should correspond to your unit of work (transaction boundary). In the case of a web application, that should almost certainly be the lifetime of a web request.
The most common way to achieve this is with an HttpModule that both creates your session and starts your transaction when a request begins, then commits when the request has finished. I would have the HttpModule register the session in the HttpContext.Items collection.
In your IoC container, you could register something like HttpContextSessionLocator against ISessionLocator.
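A rough sketch of what that locator could look like, assuming the HttpModule stashes the session in HttpContext.Items (the key name is made up):

public interface ISessionLocator
{
    ISession GetCurrentSession();
}

public class HttpContextSessionLocator : ISessionLocator
{
    private const string SessionKey = "__NHibernateSession";

    public ISession GetCurrentSession()
    {
        // The HttpModule is assumed to have opened the session (and transaction) at BeginRequest.
        return (ISession)HttpContext.Current.Items[SessionKey];
    }
}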
I should mention that your generic error handling should locate the current session and roll back the transaction automatically, or you could end up committing half a unit of work.
I'm trying to embrace widespread dependency injection/IoC. As I read more and more about the benefits, I can certainly appreciate them; however, I am concerned that in some cases embracing the dependency injection pattern might lead me to create flexibility at the expense of being able to limit risk by encapsulating controls on what the system is capable of doing and what mistakes I or another programmer on the project are capable of making. I suspect I'm missing something in the pattern that addresses my concerns and am hoping someone can point it out.
Here's a simplified example of what concerns me. Suppose I have a method NotifyAdmins on a Notification class and that I use this method to distribute very sensitive information to users that have been defined as administrators in the application. The information might be distributed by fax, email, IM, etc. based on user-defined settings. This method needs to retrieve a list of administrators. Historically, I would encapsulate building the set of administrators in the method with a call to an AdminSet class, or a call to a UserSet class that asks for a set of user objects that are administrators, or even via direct call(s) to the database. Then, I can call the method Notification.NotifyAdmins without fear of accidentally sending sensitive information to non-administrators.
I believe dependency injection calls for me to take an admin list as a parameter (in one form or another). This does facilitate testing, however, what's to prevent me from making a foolish mistake in calling code and passing in a set of NonAdmins? If I don't inject the set, I can only accidentally email the wrong people with mistakes in one or two fixed places. If I do inject the set aren't I exposed to making this mistake everywhere I call the method and inject the set of administrators? Am I doing something wrong? Are there facilities in the IoC frameworks that allow you to specify these kinds of constraints but still use dependency injection?
Thanks.
You need to reverse your thinking.
If you have a service/class that is supposed to mail out private information to admins only, instead of passing a list of admins to this service, instead you pass another service from which the class can retrieve the list of admins.
Yes, you still have the possibility of making a mistake, but this code:
AdminProvider provider = new AdminProvider();
Notification notify = new Notification(provider);
notify.Execute();
is harder to get wrong than this:
String[] admins = new String[] { "joenormal@hotmail.com" };
Notification notify = new Notification(admins);
notify.Execute();
In the first case, the methods and classes involved would clearly be named in such a way that it would be easy to spot a mistake.
Internally in your Execute method, the code might look like this:
List<String> admins = _AdminProvider.GetAdmins();
...
If, for some reason, the code looks like this:
List<String> admins = _AdminProvider.GetAllUserEmails();
then you have a problem, but that should be easy to spot.
No, dependency injection does not require you to pass the admin list as a parameter. I think you are slightly misunderstanding it. However, in your example, it would involve you injecting the AdminSet instance that your Notification class uses to build its admin list. This would then enable you to mock out this object to test the Notification class in isolation.
Dependencies are generally injected at the time a class is instantiated, using one of these methods: constructor injection (passing dependent class instances in the class's constructor), property injection (setting the dependent class instances as properties), or something else (e.g. making all injectable objects implement a particular interface that allows the IoC container to call a single method that injects their dependencies). They are not generally injected into each method call as you suggest.
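For instance, constructor injection in your example might look roughly like this (AdminSet and GetAdministrators are stand-ins for however you actually build the admin list):

public class Notification
{
    private readonly AdminSet _adminSet;

    public Notification(AdminSet adminSet)
    {
        // The dependency is injected once, when the class is instantiated.
        _adminSet = adminSet;
    }

    public void NotifyAdmins(string sensitiveMessage)
    {
        foreach (var admin in _adminSet.GetAdministrators())
        {
            // distribute sensitiveMessage by fax/email/IM per the admin's settings...
        }
    }
}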
Other good answers have already been given, but I'd like to add this:
You can be both open for extensibility (following the Open/Closed Principle) and still protect sensitive assets. One good way is by using the Specification pattern.
In this case, you could pass in a completely arbitrary list of users, but then filter those users by an AdminSpecification so that only Administrators receive the notification.
Perhaps your Notification class would have an API similar to this:
public class Notification
{
    private readonly string message;

    public Notification(string message)
    {
        this.message = message;
        this.AdminSpecification = new AdminSpecification();
    }

    public ISpecification<User> AdminSpecification { get; set; }

    public void SendTo(IEnumerable<User> users)
    {
        foreach (var u in users.Where(this.AdminSpecification.IsSatisfiedBy))
        {
            this.Notify(u);
        }
    }

    // more members
}
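For completeness, the specification types used above might be shaped roughly like this (a sketch; the IsAdministrator check is only illustrative):

public interface ISpecification<T>
{
    bool IsSatisfiedBy(T candidate);
}

public class AdminSpecification : ISpecification<User>
{
    public bool IsSatisfiedBy(User candidate)
    {
        // Illustrative check; use whatever actually marks a user as an administrator.
        return candidate.IsAdministrator;
    }
}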
You can still override the filtering behavior for testing purposes by assigning a different Specification, but the default value is secure, so you would be less likely to make mistakes with this API.
For even better protection, you could wrap this whole implementation behind a Facade interface.