Named binding - MVC3 - Ninject

I'm trying to register two implementations of the same interface using named instances:
kernel.Bind<IRepository>().To<CachedRepository>().InSingletonScope();
kernel.Bind<IRepository>().To<DbRepository>().InSingletonScope().Named("db");
The idea is that if I don't specify a name, the CachedRepository gets created, and if I need a DB-oriented one I'd use the Named attribute. But this fails miserably even when a simple object gets created:
public class TripManagerController : Controller
{
    [Inject]
    public IRepository Repository { get; set; } // default Cached repo must be created

    public TripManagerController()
    {
        ViewBag.LogedEmail = "test@test.com";
    }
}
The error is:
Error activating IRepository More than one matching bindings are
available. Activation path: 2) Injection of dependency IRepository
into parameter repository of constructor of type TripManagerController
1) Request for TripManagerController
Suggestions: 1) Ensure that you have defined a binding for
IRepository only once.
Is there a way to achieve what I want without creating a new interface for DB-oriented repositories?
Thx

The [Named] attribute as shown in the wiki should work.
BTW stay away from anything other than ctor injection!

It would seem you cannot do what you're trying. I've just come across the same issue, and as well as finding your question I also found this one, where Remo Gloor, the author of Ninject, replied:
https://stackoverflow.com/a/4051391/495964
While Remo didn't explicitly say it couldn't be done, his answer was to name both bindings (or use custom attribute binding, which amounts to the same thing).
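For illustration, a minimal sketch of the "name both bindings" approach combined with constructor injection (the "cached" name below is an assumption made for the example, not from the original code):

kernel.Bind<IRepository>().To<CachedRepository>().InSingletonScope().Named("cached");
kernel.Bind<IRepository>().To<DbRepository>().InSingletonScope().Named("db");

public class TripManagerController : Controller
{
    private readonly IRepository _repository;

    // [Named] on the constructor parameter picks a binding explicitly,
    // so Ninject never has to choose between two ambiguous bindings.
    public TripManagerController([Named("cached")] IRepository repository)
    {
        _repository = repository;
    }
}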

Related

Autofac Multitenant Database Configuration

I have a base abstract context which has a couple hundred shared objects, and then two "implementation" contexts which both inherit from the base and are designed to be used by different tenants in a .NET Core application. A tenant object is injected into the constructor so OnConfiguring can pick up which connection string to use.
public abstract class BaseContext : DbContext
{
    protected readonly AppTenant Tenant;

    protected BaseContext(AppTenant tenant)
    {
        Tenant = tenant;
    }
}

public class TenantOneContext : BaseContext
{
    public TenantOneContext(AppTenant tenant)
        : base(tenant)
    {
    }
}
In startup.cs, I register the DbContexts like this:
services.AddDbContext<TenantOneContext>();
services.AddDbContext<TenantTwoContext>();
Then, using the Autofac container and the Multitenant package, I register tenant-specific contexts like this:
IContainer container = builder.Build();
MultitenantContainer mtc = new MultitenantContainer(container.Resolve<ITenantIdentificationStrategy>(), container);

mtc.ConfigureTenant("1", config =>
{
    config.RegisterType<TenantOneContext>().AsSelf().As<BaseContext>();
});

mtc.ConfigureTenant("2", config =>
{
    config.RegisterType<TenantTwoContext>().AsSelf().As<BaseContext>();
});

Startup.ApplicationContainer = mtc;
return new AutofacServiceProvider(mtc);
My service layers are designed around the BaseContext being injected for reuse where possible, and then services which require specific functionality use the TenantContexts.
public class BusinessService
{
    private readonly BaseContext _baseContext;

    public BusinessService(BaseContext context)
    {
        _baseContext = context;
    }
}
In the above service at runtime, I get an exception "No constructors on type 'BaseContext' can be found with the constructor finder 'Autofac.Core.Activators.Reflection.DefaultConstructorFinder'". I'm not sure why this is broken... the AppTenant is definitely created, as I can inject it in other places successfully. I can make it work if I add an extra registration:
builder.RegisterType<TenantOneContext>().AsSelf().As<BaseContext>();
I don't understand why the above registration is required for the tenant container registrations to work. This seems broken to me; in StructureMap (SaasKit) I was able to do this without adding an extra registration, and I assumed using the built-in AddDbContext registrations would take care of creating a default registration for the containers to overwrite. Am I missing something here, or is this possibly a bug in the multitenant functionality of Autofac?
UPDATE:
Here is a fully runnable repo of the question: https://github.com/danjohnso/testapp
Why is line 66 of Startup.cs needed if I have lines 53/54 and lines 82-90?
As I expected, your problem has nothing to do with multitenancy as such. You've implemented it almost entirely correctly, and you're right: you do not need that additional registration, and, by the way, you don't need these two (below) either, because you register them in the tenants' scopes a bit later:
services.AddDbContext<TenantOneContext>();
services.AddDbContext<TenantTwoContext>();
So, you've made only one very small but very important mistake in your TenantIdentificationStrategy implementation. Let's walk through how you create the container - this is mainly for other people who may run into this problem as well; I'll mention only the relevant parts.

First, TenantIdentificationStrategy gets registered in the container along with other stuff. Since there's no explicit lifetime scope specified, it is registered as InstancePerDependency() by default - but that does not really matter, as you'll see. Next, the "standard" IContainer gets created by Autofac's builder.Build(). The next step in this process is to create the MultitenantContainer, which takes an instance of ITenantIdentificationStrategy. This means that the MultitenantContainer and its captive dependency - the ITenantIdentificationStrategy - will be singletons regardless of how ITenantIdentificationStrategy is registered in the container. In your case it gets resolved from that standard "root" container in order to manage its dependencies - well, this is what Autofac is for anyway.

Everything is fine with this approach in general, but this is where your problem actually begins. When Autofac resolves this instance it does exactly what it is expected to do - it injects all the dependencies into TenantIdentificationStrategy's constructor, including IHttpContextAccessor. So, right there in the constructor, you grab an instance of HttpContext from that context accessor and store it for use in the tenant resolution process - and this is the fatal mistake: there is no HTTP request at this time, and since TenantIdentificationStrategy is a singleton, there never will be one for it! So it holds a null request context for the whole application lifespan. This effectively means that TenantIdentificationStrategy cannot resolve a tenant identifier based on HTTP requests - because it never actually analyzes them. Consequently, the MultitenantContainer cannot resolve any tenant-specific services.

Now that the problem is clear, its solution is obvious and trivial - just move the fetching of the request context (context = _httpContextAccessor.HttpContext) into the TryIdentifyTenant() method. That method gets called in the proper context and will be able to access the request context and analyze it.
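For reference, a minimal sketch of the corrected strategy (the header-based tenant lookup below is an assumption made for illustration; the real repo identifies tenants differently):

public class TenantIdentificationStrategy : ITenantIdentificationStrategy
{
    private readonly IHttpContextAccessor _httpContextAccessor;

    public TenantIdentificationStrategy(IHttpContextAccessor httpContextAccessor)
    {
        // Store only the accessor; do NOT capture HttpContext here, because no
        // request exists when this singleton is constructed.
        _httpContextAccessor = httpContextAccessor;
    }

    public bool TryIdentifyTenant(out object tenantId)
    {
        // Fetch the current request context at call time instead.
        var context = _httpContextAccessor.HttpContext;
        tenantId = context?.Request.Headers["X-Tenant-Id"].ToString();
        return !string.IsNullOrEmpty((string)tenantId);
    }
}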
PS. This digging has been highly educational for me since I had absolutely no idea about Autofac's multi-tenant concept, so thank you very much for such an interesting question! :)
PPS. And one more thing: this question is a perfect example of how important a well-prepared example is. You provided a very good one. Without it, no one would have been able to figure out what the problem was, since the most important part of it was not presented in the question - and sometimes you just don't know where that part actually is...

Configuring DI container for global filters with services in their constructors

I have a site using SimpleInjector and MVC, and I'm trying to determine where I'm going wrong architecturally.
I have my DI container being set up:
public static class DependencyConfig
{
    private static Container Container { get; set; }

    public static void RegisterDependencies(HttpConfiguration configuration)
    {
        // *snip*
        FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters, Container);
    }
}
And my RegisterGlobalFilters looks like this:
public static class FilterConfig
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters, Container container)
    {
        filters.Add(new HandleErrorAttribute());
        filters.Add(container.GetInstance<OrderItemCountActionFilterAttribute>());

        if (container.GetInstance<ISiteConfiguration>().ConfiguredForExternalOrders)
        {
            filters.Add(container.GetInstance<StoreGeolocationActionFilterAttribute>());
        }

        filters.Add(container.GetInstance<StoreNameActionFilterAttribute>());
    }
}
The store can take orders (through this website) at in-store kiosks or online from home. External orders would need to geolocate to display information to the customer regarding their closest store. But this means I have to use the container as a service locator in my global filters, which means I have to hide the call to the global filters in my DI container. This all seems to me like an anti-pattern or that there should be a better way to do this.
There is no real issue with the way you are configuring the container and calling the container to resolve the Filter instances, as long as all of this work is being done in the composition root.
The underlying problem as I see it is using Attributes in manner they were not intended to be used. Useful reads on this subject are Steven's post on Dependency Injection in Attributes: don’t do it! and Mark Seemann's post on Passive Attributes. If you were to follow the suggestion in these posts I think you'd find you end up with code you are much happier with.
Also see this recent question raised by Steven here regarding the singleton nature of MVC attributes.
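As a rough sketch of the passive-attribute approach those posts describe (the attribute, filter, and IGeolocationService names below are made up for illustration and are not from the question):

// A marker attribute with no behaviour and no dependencies.
public class RequiresStoreGeolocationAttribute : Attribute { }

// The real filter is an ordinary class registered as a global filter from the
// composition root; its dependencies arrive through the constructor.
public class StoreGeolocationFilter : IActionFilter
{
    private readonly IGeolocationService _geolocation;

    public StoreGeolocationFilter(IGeolocationService geolocation)
    {
        _geolocation = geolocation;
    }

    public void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // Only act on actions decorated with the marker attribute.
        bool decorated = filterContext.ActionDescriptor
            .GetCustomAttributes(typeof(RequiresStoreGeolocationAttribute), inherit: true)
            .Any();

        if (decorated)
        {
            // ... use _geolocation to look up the closest store ...
        }
    }

    public void OnActionExecuted(ActionExecutedContext filterContext) { }
}

The attribute stays a passive marker that can be applied freely to actions, while the filter itself is composed with its dependencies exactly once at startup.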
After a bit of discussion with a system architect, we came to the (embarrassingly simple) conclusion that the best answer for our architecture would be to create two Register functions in our DI container - one called RegisterCorporateWebSiteDependencies() and another RegisterStoreWebSiteDependencies().
The natural extension of that is to also have two global filter configs called after dependency composition: (again) one RegisterCorporateGlobalFilters() and one RegisterStoreGlobalFilters().
This results in a single top-level if statement that runs the appropriate registrations, e.g.:
if (Convert.ToBoolean(ConfigurationManager.AppSettings["IsCorporate"]))
{
    DependencyConfig.RegisterCorporateWebSiteDependencies(GlobalConfiguration.Configuration);
}
else
{
    DependencyConfig.RegisterStoreWebSiteDependencies(GlobalConfiguration.Configuration);
}
Which is much more straightforward, and removes the logic from the other locations where it can be confusing.

How to properly construct dependent objects manually?

I'm using Ninject.Web.Common and I really like Ninject so far. I'm not used to dependency injection yet, so I've got a pretty basic question that I haven't managed to answer by googling so far.
Suppose I have a Message Handler which depends on my IUnitOfWork implementation. I need to construct an instance of my handler to add it to Web API config. I've managed to achieve this using the following code:
var resolver = GlobalConfiguration.Configuration.DependencyResolver;
config.MessageHandlers.Add((myproject.Filters.ApiAuthHandler)resolver.GetService(typeof(myproject.Filters.ApiAuthHandler)));
I really dislike typing this kind of stuff so I'm wondering if I'm doing it right. What's the common way of constructing dependent objects manually?
Well, I've only been using dependency injection in real-world projects for about half a year, so I'm pretty new to this stuff. I would really recommend the Dependency Injection in .NET book, where all the concepts are described pretty well; I learned a lot from it.
So far, what has worked best for me is overriding the default controller factory like this:
public class NinjectControllerFactory : DefaultControllerFactory
{
    private readonly IKernel _kernel;

    public NinjectControllerFactory()
    {
        _kernel = new StandardKernel();
        ConfigureBindings();
    }

    protected override IController GetControllerInstance(RequestContext requestContext,
                                                          Type controllerType)
    {
        return controllerType == null
            ? null
            : (IController)_kernel.Get(controllerType);
    }

    private void ConfigureBindings()
    {
        // Bind the abstraction to its concrete implementation
        // (UnitOfWork here stands for a hypothetical implementation of IUnitOfWork).
        _kernel.Bind<IUnitOfWork>().To<UnitOfWork>();
    }
}
And in Global.asax, in the Application_Start method, you just have to add this line:
ControllerBuilder.Current.SetControllerFactory(new NinjectControllerFactory());
This approach is called the composition root pattern and is considered the "good" way to do dependency injection.
What I would also recommend is that if you have multiple endpoints, like services and other workers, you create an Application.CompositionRoot project and handle the different binding configurations for your application's different endpoints there.
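For the original message-handler case, the same composition-root idea can be applied in the Web API configuration; a minimal sketch, assuming the ApiAuthHandler from the question and a kernel that your composition root exposes:

// In the Web API composition root, resolve the handler from the kernel once
// instead of casting the result of the dependency resolver by type name.
public static void RegisterHandlers(HttpConfiguration config, IKernel kernel)
{
    config.MessageHandlers.Add(kernel.Get<myproject.Filters.ApiAuthHandler>());
}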

IQueryable Repository with StructureMap (IoC) - How do I Implement IDisposable?

If I have the following repository method:
public IQueryable<User> Users()
{
    var db = new SqlDataContext();
    return db.Users;
}
I understand that the connection is opened only when the query is fired:
public class ServiceLayer
{
    public IRepository repo;

    public ServiceLayer(IRepository injectedRepo)
    {
        this.repo = injectedRepo;
    }

    public List<User> GetUsers()
    {
        return repo.Users().ToList(); // connection opened, query fired, connection closed. (or is it??)
    }
}
If this is the case, do I still need to make my Repository implement IDisposable?
The Visual Studio Code Metrics certainly think I should.
I'm using IQueryable because I give control of the queries to my service layer (filters, paging, etc.), so please no architectural discussions over the fact that I'm using it.
BTW - SqlDataContext is my custom class which extends Entity Framework's ObjectContext class (so I can have POCO parties).
So the question - do I really HAVE to implement IDisposable?
If so, I have no idea how this is possible, as each method shares the same repository instance.
EDIT
I'm using Dependency Injection (StructureMap) to inject the concrete repository into the service layer. This pattern is followed down the app stack - I'm using ASP.NET MVC and the concrete service is injected into the Controllers.
In other words:
User requests URL
Controller instance is created, which receives a new ServiceLayer instance, which is created with a new Repository instance.
Controller calls methods on service (all calls use same Repository instance)
Once request is served, controller is gone.
I am using Hybrid mode to inject dependencies into my controllers, which according to the StructureMap documentation causes the instances to be stored in HttpContext.Current.Items.
So, I can't do this:
using (var repo = new Repository())
{
    return repo.Users().ToList();
}
As this defeats the whole point of DI.
A common approach used with NHibernate is to create your session (or, here, ObjectContext) in BeginRequest (or some other similar lifecycle event) and then dispose it in EndRequest. You can put that code in an HttpModule.
You would need to change your Repository so that it has the ObjectContext injected. Your Repository should get out of the business of managing the ObjectContext lifecycle.
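A rough sketch of that per-request lifecycle, assuming the SqlDataContext and IRepository from the question (wiring StructureMap to hand the per-request instance to the repository is left out, and the module and item names here are illustrative):

// Creates the data context at the start of each request and disposes it at the end.
public class DataContextModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        application.BeginRequest += (sender, e) =>
            HttpContext.Current.Items["DataContext"] = new SqlDataContext();

        application.EndRequest += (sender, e) =>
        {
            var context = HttpContext.Current.Items["DataContext"] as SqlDataContext;
            if (context != null)
            {
                context.Dispose();
            }
        };
    }

    public void Dispose() { }
}

// The repository no longer news up or disposes the context; it just uses
// whatever context is injected into it.
public class Repository : IRepository
{
    private readonly SqlDataContext _db;

    public Repository(SqlDataContext db)
    {
        _db = db;
    }

    public IQueryable<User> Users()
    {
        return _db.Users;
    }
}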
I would say you definitely should. Unless Entity Framework handles connections very differently from LINQ to SQL (which is what I've been using), you should implement IDisposable whenever you are working with connections. It might be true that the connection automatically closes after your transaction successfully completes. But what happens if it doesn't complete successfully? Implementing IDisposable is a good safeguard for making sure you don't have any connections left open after you're done with them. A simpler reason is that it's a best practice to implement IDisposable.
Implementation could be as simple as putting this in your repository class:
public void Dispose()
{
    SqlDataContext.Dispose();
}
Then, whenever you do anything with your repository (e.g., with your service layer), you just need to wrap everything in a using clause. You could do several "CRUD" operations within a single using clause, too, so you only dispose when you're all done.
Update
In my service layer (which I designed to work with LINQ to SQL, but hopefully this applies to your situation as well), I new up a new repository each time. To allow for testability, I have the dependency injector pass in a repository provider (instead of a repository instance). Each time I need a new repository, I wrap the call in a using statement, like this:
using (var repository = GetNewRepository())
{
    ...
}

public Repository<TDataContext, TEntity> GetNewRepository()
{
    return _repositoryProvider.GetNew<TDataContext, TEntity>();
}
If you do it this way, you can mock everything (so you can test your service layer in isolation), yet still make sure you are disposing of your connections properly.
If you really need to do multiple operations with a single repository, you can put something like this in your base service class:
public void ExecuteAndSave(Action<Repository<TDataContext, TEntity>> action)
{
    using (var repository = GetNewRepository())
    {
        action(repository);
        repository.Save();
    }
}
action can be a series of CRUD actions or a complex query, but you know that if you call ExecuteAndSave(), when it's all done, your repository will be disposed properly.
EDIT - Advice Received From Ayende Rahien
Got an email reply from Ayende Rahien (of Rhino Mocks, Raven, Hibernating Rhinos fame).
This is what he said:
Your problem is that you initialize your context like this:

_genericSqlServerContext = new GenericSqlServerContext(new EntityConnection("name=EFProfDemoEntities"));

That means that the context doesn't own the entity connection, which means that it doesn't dispose it. In general, it is vastly preferable to have the context create the connection. You can do that by using:

_genericSqlServerContext = new GenericSqlServerContext("name=EFProfDemoEntities");
Which definitely makes sense - however, I would have thought that disposing of a SqlServerContext would also dispose of the underlying connection; guess I was wrong.
Anyway, that is the solution - now everything is getting disposed of properly.
So I no longer need to wrap the repository in a using:
public ICollection<T> FindAll<T>(Expression<Func<T, bool>> predicate, int maxRows) where T : Foo
{
    // don't need this anymore
    // using (var cr = ObjectFactory.GetInstance<IContentRepository>())
    return _fooRepository.Find().OfType<T>().Where(predicate).Take(maxRows).ToList();
}
And in my base repository, I implement IDisposable and simply do this:
Context.Dispose(); // Context is an instance of my custom sql context.
Hope that helps others out.

Why does NHibernate need non-settable members to be virtual?

NHibernate requires not only settable properties of your domain to be virtual but also get-only properties and methods. Does anyone know what the reason for this is?
I cannot imagine a possible use.
The reason is lazy loading. In order to make lazy loading possible, a proxy class is created. It must intercept every call from "outside" in order to load your entity before the actual method/property is executed. If some methods/properties were not virtual, it would be impossible to intercept these calls, and the entity wouldn't be loaded.
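Conceptually, the generated proxy looks something like this (a hand-written sketch of the idea, not NHibernate's actual generated code; the Customer entity is invented for the example):

public class Customer
{
    public virtual string Name { get; set; }

    public virtual string GetDisplayName()
    {
        return "Customer: " + Name;
    }
}

// NHibernate emits a subclass along these lines and hands it out in place of the
// real entity; overriding the virtual members is what lets it load lazily.
public class CustomerProxy : Customer
{
    private bool _loaded;

    private void LoadIfNeeded()
    {
        if (!_loaded)
        {
            // ... hit the database and populate the entity's state ...
            _loaded = true;
        }
    }

    public override string Name
    {
        get { LoadIfNeeded(); return base.Name; }
        set { LoadIfNeeded(); base.Name = value; }
    }

    public override string GetDisplayName()
    {
        LoadIfNeeded();
        return base.GetDisplayName();
    }
}

If Name or GetDisplayName() were not virtual, the proxy could not override them, and calling them would bypass LoadIfNeeded() entirely.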
I'm not an NHibernate expert, but from reading Oren's blogs, and from what I've learned of NH, the basic pattern of use is to proxy the objects for the ORM. This means, among other things, that the only things you'll be able to map are going to be things that are made virtual, otherwise NH would have to use a different strategy to redefine the implementations under the hood.
Because you may want to access your settable properties from there (a get-only property or a method), perhaps in some fancy indirect or reflection-based way. So it's to be 100% sure that whenever your entity is used, it's initialized.
Example:
public string GetSmth
{
    get
    {
        // NHibernate will not know that you access this field.
        return _name;
    }
}

private string _name;

public virtual string Name { get { return _name; } set { _name = value; } }
Here's Ayende explaining this in relation to Entity Framework:
http://ayende.com/Blog/archive/2009/05/29/why-defer-loading-in-entity-framework-isnrsquot-going-to-work.aspx
AddProduct is a non-virtual method call, so it cannot be intercepted. Accessing the _products field also cannot be intercepted.
The only reason I can see for wanting to execute a method without going through the NH proxy (i.e. without loading data) is when you have a method that does not access your class's data. But in that case, if the method does not use your class's data, why does it belong to that class at all?