Multi-tenant ASP.NET MVC application - Where to store tenant-specific data

I'm working on a requirement to change an existing ASP.NET MVC application to become multi-tenant ready. The application was built for "only one customer"; so far, each client gets their own installation of the MVC app. The database structure, however, is already prepared to host "multiple" websites inside one MVC app, so all the database queries already take the site into consideration (siteId).
I have several questions regarding multi-tenant applications and I'm still studying the subject. Today I started making changes to the MVC app and came across one thing: the application has a table with several configuration settings, such as AppSMTPServer, AppShowLoginBox and so on. These are parameters created to make the app dynamic.
All these configurations are currently stored in the ApplicationState inside a static class, something like this:
public static IDictionary<String, String> Configurations
{
    get
    {
        if (HttpContext.Current.Application[CONFIGURATIONS] == null)
        {
            LoadConfiguration();
        }
        return (IDictionary<String, String>)HttpContext.Current.Application[CONFIGURATIONS];
    }
    private set
    {
        HttpContext.Current.Application[CONFIGURATIONS] = value;
    }
}
My question is: if I change the MVC app to become multi-tenant ready, each tenant will have its own configuration values, so I cannot store them in the ApplicationState anymore, as it is populated on Application_Start and stays there for good.
What are the options for storing tenant-specific configuration data? I looked on several sites and couldn't find any "good practices" on this. If I missed something that would help, please leave a comment. Thanks!

In my experience building multi-tenant apps, this use case can be handled as follows:
The data remains in the DB.
Upon a tenant login we are likely to need their config values, so we fetch them from the DB store and add them to a cache (e.g. Redis as a distributed cache).
Similarly, we can cache on each tenant hit. This way, as the application is used repeatedly, more of the static data moves into the cache, the load on the app and the database drops, and response times improve.
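A minimal sketch of that cache-aside pattern, keyed by tenant (TenantConfigurations, LoadConfigurationFromDb and the in-process ConcurrentDictionary standing in for Redis are all hypothetical names):

using System.Collections.Concurrent;
using System.Collections.Generic;

public static class TenantConfigurations
{
    // Stand-in for a distributed cache such as Redis, keyed by tenant (siteId).
    private static readonly ConcurrentDictionary<int, IDictionary<string, string>> _cache =
        new ConcurrentDictionary<int, IDictionary<string, string>>();

    public static IDictionary<string, string> For(int siteId)
    {
        // Cache-aside: on a miss, load this tenant's settings from the DB once;
        // subsequent hits are served straight from the cache.
        return _cache.GetOrAdd(siteId, LoadConfigurationFromDb);
    }

    private static IDictionary<string, string> LoadConfigurationFromDb(int siteId)
    {
        // Hypothetical: query the existing configuration table filtered by siteId.
        return new Dictionary<string, string>();
    }
}

Swapping the dictionary for a Redis client keeps the same shape; the key point is that every lookup is scoped by the tenant instead of being stored application-wide.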

Related

Multi-tenant .Net Core Web API

I have a requirement to build a Web API for an existing system. There are various agencies throughout the country, and each agency has its own database. All databases are on one single server, all are identical in structure, and each has its own username and password. An agency has one or more users, and a user can belong to one or more agencies. There is also one special database which contains a table of all users, a table of all agencies, and a user-agency bridge table.
Currently they are using a traditional Windows desktop application. When a user sets up this Windows program, they log in with a username and password. The system then displays for them a list of all the agencies that they belong to (normally just one, but some "power users" can belong to a few). They pick an agency, and then the program connects to the correct database. For the remainder of the session, everything that the user does will be done on that database.
The client wants to create a web app to eventually replace the Windows program (and the two will be running side by side for a while). One developer is creating the front end in Angular 5, and I am developing the API in ASP .Net Core 2.1.
So the web app will function in a similar manner to the Windows app. A user logs in to the web app. The web app, which consumes my Web API, tells the API which user just logged in. The API then checks which agency (or agencies) this user belongs to, using the database that stores that data, and returns the list of agencies to the web app. There, the user picks an agency. From this point on, the web app will include this Agency ID in the header of all API calls. The API, when it receives a request from the web app, will know which database to use based on the Agency ID in the header of the request.
Hope that makes sense...
Obviously this means that I will have to change the connection string of the DbContext on the fly, depending on which database the API must talk to. I've been looking at this, firstly by doing it on the controller itself, which worked but would involve a lot of copy-and-paste anti-patterns in all my controllers. So I am trying to move this into the DbContext's OnConfiguring() method. I was thinking it'd be best to create a DbContext factory to create the DbContexts using the appropriate connection string. I'm just a bit lost, though. You see, when the web app calls an endpoint on the Web API (let's say an HTTP GET request to get a list of accounts), this fires the HttpGet handler in the Accounts controller. That action method then reads the Agency ID header. But this is all happening on the controller... If I call the DbContext factory from the DbContext's OnConfiguring() method, something would have to pass it the Agency ID (which was read in the controller) so that the factory knows which connection string to create. I'm trying not to use global variables, to keep my classes loosely coupled.
Unless I have some service running in the pipeline that intercepts all requests, reads the Agency ID header, and this somehow gets injected into the DbContext constructor? No idea how I would go about doing this...
In summary, I'm a bit lost. I'm not even sure if this is the correct approach. I've looked at some "multi-tenant" examples, but to be honest, I've found them a bit hard to understand, and I was hoping I could do something a bit simpler for now, and with time, as my knowledge of .Net Core improves, I can look at improving the code correspondingly.
I am working on something similar to what you describe here. As I am also quite at the start, I have no silver bullet yet. But there is one thing I can help you with in your approach:
firstly by doing it on the controller itself, which worked but would involve a lot of copy-and-paste anti-patterns in all my controllers.
I took the approach of having a middleware in charge of swapping the DB connection string. Something like this:
public class TenantIdentifier
{
    private readonly RequestDelegate _next;

    public TenantIdentifier(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext httpContext, GlobalDbContext dbContext)
    {
        // Read the tenant's GUID from a custom request header.
        var tenantGuid = httpContext.Request.Headers["X-Tenant-Guid"].FirstOrDefault();

        if (!string.IsNullOrEmpty(tenantGuid))
        {
            // Look the tenant up in the global database and stash it in
            // HttpContext.Items so anything later in the pipeline can use it.
            var tenant = dbContext.Tenants.FirstOrDefault(t => t.Guid.ToString() == tenantGuid);
            httpContext.Items["TENANT"] = tenant;
        }

        await _next.Invoke(httpContext);
    }
}

public static class TenantIdentifierExtension
{
    public static IApplicationBuilder UseTenantIdentifier(this IApplicationBuilder app)
    {
        app.UseMiddleware<TenantIdentifier>();
        return app;
    }
}
Here I am using a self-created HTTP header called X-Tenant-Guid to identify the tenant's GUID. Then I make a request to the global database, where I get the connection string of this tenant's DB.
I made the example public here: https://github.com/riscie/ASP.NET-Core-Multi-Tenant-multi-db-Example (it's not yet updated to ASP.NET Core 2.1, but that should not be a problem to do quickly).
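To get the tenant found by the middleware into a tenant-specific DbContext without resorting to globals, one option is to resolve it at registration time through IHttpContextAccessor. A rough sketch under assumed names (TenantDbContext and Tenant.ConnectionString are hypothetical; "TENANT" is the item set by the middleware above):

// In Startup.ConfigureServices (ASP.NET Core 2.1 / EF Core):
services.AddHttpContextAccessor();

services.AddDbContext<TenantDbContext>((provider, options) =>
{
    var accessor = provider.GetRequiredService<IHttpContextAccessor>();
    var tenant = accessor.HttpContext?.Items["TENANT"] as Tenant;

    if (tenant != null)
    {
        // The connection string is assumed to have been loaded from the global DB.
        options.UseSqlServer(tenant.ConnectionString);
    }
});

Because the scoped DbContext is only instantiated when a controller first asks for it, which happens after the middleware has run, the tenant entry is already in HttpContext.Items by then.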

API modularization in Restlet

I have developed a web application based on the Restlet API. As I add more features over time, I sometimes need to reuse a similar group of REST APIs under different endpoints, which provide slightly different contexts of execution (like switching between different instances of databases with the same schema). I'd like to refactor my code to make the API reusable and reuse it at different endpoints. My initial thinking was to design an Application for each reusable API and attach them on the router:
router.attach("/context1",APIApplication.class)
router.attach("/foo/context2",APIApplication.class)
The API should be agnostic of the configuration of the REST API. What is the best way to pass context information (for example, the instance of the database) to the Application API? Is this approach viable and correct? What are the best practices for reusing REST APIs in Restlet? Some code samples would be appreciated to illustrate your answer.
Thanks for your help.
I have seen this basic set-up running using a Component as the top-level object, attaching the sub-applications to the VirtualHost rather than a router, as per this skeleton sample.
public class Component extends org.restlet.Component
{
    public Component() throws Exception
    {
        super();

        // Client protocols
        getClients().add(Protocol.HTTP);

        // Database connection
        final DataSource dataSource = InitialContext.doLookup("java:ds");
        final Configuration configuration = new Configuration(dataSource);

        final VirtualHost host = getDefaultHost();

        // Portal modules
        host.attach("/path1", new FirstApplication());
        host.attach("/path2", new SecondApplication(configuration));
        host.attach("/path3", new ThirdApplication());
        host.attachDefault(new DefaultApplication(configuration));
    }
}
We used a custom Configuration object, basically a POJO, to pass any common config information where required, and used it to construct the Applications. We used separate 'default' Contexts for each Application.
This was coded originally against Restlet 1.1.x and has been upgraded to 2.1.x via 2.0.x. Although it works and is reasonably neat, there may be an even better way to do it in version 2.1.x or 2.2.x.

Simplest way to use NHibernate for the official "ASP.NET MVC 3 Getting Started" tutorial

Clarified Updated Question - Start
In the official MVC 3 Getting Started tutorial, it seems to me that all we have to do to get the ORM working are two steps.
First, adding the simple MovieDBContext code as described at the end of part 4...
public class MovieDBContext : DbContext
{
    public DbSet<Movie> Movies { get; set; }
}
...and second, at the beginning of part 5, with a simple right-click on the Controllers folder we can auto-generate a MoviesController that implements CRUD functionality using Entity Framework, simply by telling it which Model to use.
Now, when using the web application, we can already write to and read from the database.
What would be the simplest (or at least a simple) way to get this done for our Movie model with NHibernate instead of Entity Framework?
Clarified Updated Question - End
Original question (only for additional background-info):
I'm trying to create an ASP.Net MVC 3 application that uses NHibernate and Postgres.
Background Info
Development is done on Windows with Visual Web Developer Express, the production environment will be/should be Linux+Mono.
Steps that have worked so far:
An ASP.Net Dynamic Data Entities Web Application using Npgsql and Postgres as the DB.
Successfully run on Windows development machine.
(Following this tutorial)
An ASP.Net MVC 3 application without using a database/model yet:
Successfully run on my Windows development machine and deployed to the Linux production environment using Mono and Nginx. (Only as a proof of concept for myself, not as a web app used by the public.)
An ASP.Net MVC 3 application with a model using SQL Server Express as the DB.
Successfully run on my Windows development machine.
(Following the MVC 3 Getting Started-tutorial)
Question
So far I've managed to get Postgres to work with a "Dynamic Data Entities Web Application", but with an MVC 3 web app I'm stuck on where/how to start. For the last-mentioned MVC-3-Movie web app I want to switch the DB from SQL Server Express to Postgres using NHibernate and Npgsql (NHibernate because Mono doesn't support Entity Framework).
When you look at the end of part 4, there's the simple MovieDBContext code
public class MovieDBContext : DbContext
{
    public DbSet<Movie> Movies { get; set; }
}
and at the beginning of part 5 we auto-generate the CRUD stuff using Entity Framework, simply by telling it which Model to use
(MoviesController.cs, Create.cshtml, Delete.cshtml, Details.cshtml, Edit.cshtml, and Index.cshtml).
So I have that working with Entity Framework and SQL Server Express, but how would I achieve the same result using NHibernate? (It doesn't have to be with Postgres immediately; sticking with SQL Server as a first step would be fine. Hopefully with similar simplicity, but getting the result itself would be great.)
I found a lot of old material on how to map things manually, but what would be a good, up-to-date, standard way of achieving this with NHibernate for MVC 3?
(The closest thing I found was the source code mentioned in this thread, but it's 64 MB unzipped, I got several "Projects not loaded successfully" errors, and the author said he uses MVC 2, so I think it's a little over my head as a complete NHibernate noob.)
I think showing how this is done could be very useful for others as well, since the original tutorial is very easy to follow and is linked as the official starting point for MVC 3 app development on http://www.asp.net/mvc ("Your First ASP.NET MVC App").
So I think this would make a great up-to-date example of how to use NHibernate with MVC 3.
Actually, those automated things aren't helpful enough in real-world applications. We have to separate concerns, and using a DataContext in the UI layer is not good practice, because that dependency causes problems such as a lack of testability. I think your project needs the following:
Separation of concerns (a layered architecture: UI layer, service layer, domain layer, infrastructure layer)
A generic repository and unit-of-work wrapper around the database functionality and the ORM (EF, NHibernate, etc.); see the sketch after the references below
A service layer that works with the repositories and units of work and exposes data transfer objects or your domain objects (POCOs) to the UI layer
IoC to inject dependencies, which helps you minimize coupling
Unit tests and integration tests
Continuous integration and source control (preferably distributed, e.g. Mercurial)
Useful References:
(Sharp Architecture) http://sharparchitecture.codeplex.com/
(IOC Container) http://www.castleproject.org/container/
(Generic repository) http://code.google.com/p/genericrepository/
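For illustration, a minimal sketch of the generic repository and unit-of-work idea over NHibernate's ISession (the shape below is one possible design, not a standard API):

using System;
using NHibernate;

// One possible shape for a generic repository over NHibernate's ISession.
public interface IRepository<T> where T : class
{
    T Get(object id);
    void Add(T entity);
    void Remove(T entity);
}

public class NHibernateRepository<T> : IRepository<T> where T : class
{
    private readonly ISession _session;

    public NHibernateRepository(ISession session)
    {
        _session = session;
    }

    public T Get(object id) { return _session.Get<T>(id); }
    public void Add(T entity) { _session.Save(entity); }
    public void Remove(T entity) { _session.Delete(entity); }
}

// Unit of work wrapping an NHibernate session and transaction.
public class UnitOfWork : IDisposable
{
    private readonly ISession _session;
    private readonly ITransaction _transaction;

    public UnitOfWork(ISessionFactory sessionFactory)
    {
        _session = sessionFactory.OpenSession();
        _transaction = _session.BeginTransaction();
    }

    public IRepository<T> RepositoryFor<T>() where T : class
    {
        return new NHibernateRepository<T>(_session);
    }

    public void Commit()
    {
        _transaction.Commit();
    }

    public void Dispose()
    {
        _transaction.Dispose();
        _session.Dispose();
    }
}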
NuGet is your friend. Here's a good example of using NuGet to wire in your dependencies and configuration pretty much automatically.
Hope this helps.
A suggestion: don't get hung up on all the automatic stuff that the tutorials are showing you. Microsoft is just trying to show that you can easily get things started if you don't try to do anything unique.
Now for your situation. When you're making a controller, you want to bind that controller to a type of model that you created somewhere. With NHibernate, I'm assuming you'll have manually created these POCOs and that you're using one of the many ways to map those POCOs through NHibernate to your database; see the example below.
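For example, one of those mapping options is Fluent NHibernate, where the mapping for the tutorial's Movie POCO might look roughly like this (the property names are assumed to match the tutorial's model):

using FluentNHibernate.Mapping;

// Maps the Movie POCO to its table, Fluent NHibernate style.
public class MovieMap : ClassMap<Movie>
{
    public MovieMap()
    {
        Table("Movies");
        Id(x => x.ID);
        Map(x => x.Title);
        Map(x => x.ReleaseDate);
        Map(x => x.Genre);
        Map(x => x.Price);
    }
}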
You won't be able to use the Entity Framework options, because they depend on features of that framework to provide information about the object, database, etc. The easiest thing is to make a controller that either gives you the options for CRUD or is an empty controller on which you build up your own ActionResults.
Hope this helps some, and good luck with your project.

How to handle multiple storage backends transparently

I'm working with an application right now that uses a third-party API for handling some batch email-related tasks, and in order for that to work, we need to store some information in this service. Unfortunately, this information (first/last name, email address) is also something we want to use from our application. My normal inclination is to pick one canonical data source and stick with it, but round-tripping to a web service every time I want to look up these fields isn't really a viable option (we use some of them quite a bit), and the service's API requires the records to be stored there, so the duplication is sadly necessary.
But I have no interest in peppering every method throughout our business classes with code to synchronize data to the web service any time they might be updated, and I also don't think my entity should be aware of the service to update itself in a property setter (or whatever else is updating the "truth").
We use NHibernate for all of our DAL needs, and to my mind this data replication is really a persistence issue, so I've whipped up a PoC implementation using an EventListener (both PostInsert and PostUpdate) that checks: if the entity is of type X and any of fields [Y..Z] have changed, update the web service with the new state.
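Roughly like this (Contact and MailServiceClient are placeholder names, the dirty check against the persister's property names is simplified, and the listeners would be registered through the NHibernate Configuration's EventListeners collections):

using System;
using System.Linq;
using NHibernate.Event;

public class ContactSyncListener : IPostUpdateEventListener, IPostInsertEventListener
{
    // The replicated fields (placeholder names).
    private static readonly string[] SyncedProperties = { "FirstName", "LastName", "Email" };

    public void OnPostUpdate(PostUpdateEvent @event)
    {
        var contact = @event.Entity as Contact;
        if (contact == null)
            return;

        // Compare old and new state for the replicated fields only.
        // (OldState can be null in some flush scenarios; treat that as changed.)
        var names = @event.Persister.PropertyNames;
        bool changed = @event.OldState == null || SyncedProperties.Any(p =>
        {
            int i = Array.IndexOf(names, p);
            return i >= 0 && !Equals(@event.OldState[i], @event.State[i]);
        });

        if (changed)
            new MailServiceClient().Upsert(contact); // hypothetical service client
    }

    public void OnPostInsert(PostInsertEvent @event)
    {
        var contact = @event.Entity as Contact;
        if (contact != null)
            new MailServiceClient().Upsert(contact); // hypothetical service client
    }
}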
I feel like this strikes a good balance: it ensures our data is the canonical source, it replicates transparently, and it minimizes the chances for changes to fall through the cracks and get us into a mismatch situation (not the end of the world if e.g. the service is unreachable, we'd just do a manual batch update later, but for everybody's sanity in the general case the goal is that we never have to think about it). Still, my colleagues and I have a degree of discomfort with this way forward.
Is this a horrid idea that will invite raptors into my database at inopportune times? Is it a totally reasonable thing to do with an EventListener? Is it a serviceable solution to a less-than-ideal situation that we can just make do with and move on forever tainted? If we soldier on down this road, are there any gotchas I should be wary of in the Events pipeline?
In the case of unreliable data stores (the web service, in your case), I would introduce a concept of transactions (operations): store them in the local database, then periodically pull them from the DB and execute them against the web service (the other data store).
Something like this:
public enum Operation { Insert, Update, Delete } // whatever operations you need: CRUD, or something specific

public class OperationContainer
{
    public Operation Operation;
    public object Data; // your entity, business object or whatever
}

public class MyMailService
{
    public void SendMail(MailBusinessObject data)
    {
        // Persist the entity locally first...
        DataAccessLair<MailBusinessObject>.Persist(data);
        // ...then queue the pending web-service operation.
        var operation = new OperationContainer { Operation = Operation.Insert, Data = data };
        DataAccessLair<OperationContainer>.Persist(operation);
    }
}

public class Updater
{
    Timer everySec;

    public void OnEverySec()
    {
        var operation = DataAccessLair<OperationContainer>.GetFirstIn(); // FIFO
        var webServiceData = WebServiceData.Convert(operation); // prepare the data for the web service
        try
        {
            new WebService().DoSomething(webServiceData);
            // Only dequeue once the web service call has succeeded.
            DataAccessLair<OperationContainer>.Remove(operation);
        }
        catch
        {
            // Web service unreachable: leave the operation queued and retry on the next tick.
        }
    }
}
This is actually pretty close to the concept of a smart client - technically, not logically. Take a look at the book .NET Domain-Driven Design with C#: Problem-Design-Solution, chapter 10, or take a look at the source code from the book; it's pretty close to your situation: http://dddpds.codeplex.com/

Best way to share data between .NET application instances?

I have created a WCF service (hosted in a Windows service) on a load-balanced server. Each service instance maintains a list of current users, e.g. instance A has users A001, A002, A005; instance B has users A003, A004, A008; and so on.
Each service has an interface used to get the user list, and I expect this method to return all users across all service instances, e.g. getting the user list from instance A or instance B should return A001, A002, A003, A004, A005 and A008.
Currently I'm thinking I will store the list of current users in a database, but this list would be updated very often.
I'd like to know: is there another way to share data between WCF services that suits my situation?
Personally, the database option sounds like overkill to me just based on the notion of storing current users. If you are actually storing more than that, then using a database may make sense. But assuming you simply want a list of current users from both instances of your WCF service, I would use an in-memory solution, something like a static generic dictionary. As long as the services can be uniquely identified, I'd use the unique service ID as the key into the dictionary and just pair each key with a generic list of user names (or some appropriate user data structure) for that service. Something like:
private static Dictionary<Guid, List<string>> _currentUsers;
Since this dictionary would be shared between two WCF services, you'll need to synchronize access to it. Here's an example.
using System;
using System.Collections;
using System.Collections.Generic;

public class MyWCFService : IMyWCFService
{
    private static Dictionary<Guid, List<string>> _currentUsers =
        new Dictionary<Guid, List<string>>();

    private void AddUser(Guid serviceID, string userName)
    {
        // Synchronize access to the collection via the SyncRoot property.
        lock (((ICollection)_currentUsers).SyncRoot)
        {
            // Check if the service's ID has already been added.
            if (!_currentUsers.ContainsKey(serviceID))
            {
                _currentUsers[serviceID] = new List<string>();
            }

            // Make sure to only store the user name once for each service.
            if (!_currentUsers[serviceID].Contains(userName))
            {
                _currentUsers[serviceID].Add(userName);
            }
        }
    }

    private void RemoveUser(Guid serviceID, string userName)
    {
        // Synchronize access to the collection via the SyncRoot property.
        lock (((ICollection)_currentUsers).SyncRoot)
        {
            // Check if the service's ID has already been added.
            if (_currentUsers.ContainsKey(serviceID))
            {
                // See if the user name exists.
                if (_currentUsers[serviceID].Contains(userName))
                {
                    _currentUsers[serviceID].Remove(userName);
                }
            }
        }
    }
}
Given that you don't want users listed twice for a specific service, it would probably make sense to replace the List<string> with a HashSet<string>.
A database would seem to offer a persistent store, which may be useful or important for your application. In addition, it supports transactions etc., which may be useful to you. Lots of updates could be a performance problem, but it depends on the exact numbers, the query patterns, the database engine used, locality, etc.
An alternative might be some sort of in-memory caching server like memcached. While this can be shared and accessed in a similar (sort of) way to a database server, there are some caveats. First, these platforms are generally not backed by permanent storage: what happens when the memcached server dies? Second, they may not be ACID-compliant enough for your use: what happens under load in terms of additions and updates?
I like the in-memory approach. Actually, I am designing the same mechanism for one of the projects I'm working on now. It is good for scenarios where you don't have the opportunity to access a database, or where people are really reluctant to create a table just to store simple info like a list of users per machine name.
The only change I'd make there: a node should return only the list of its own available users to its peer, and the peer combines that with its existing list, then returns its existing list to the peer who called. That's how all the peers stay in sync with the same list.
The DB option sounds good. If there are no performance issues, it is a simple design that should work. If you can afford to be semi-real-time and non-persistent, one way would be to maintain the list in memory in each service and have each service update the others when a new user joins. This can be done as some kind of broadcast via a centralised service, or using MSMQ, etc.
If you reconsider and host using IIS, you will find that with a single line in a config file you can make the ASP.NET Global, Application and Session objects available. This trick is also very handy because it means you can share session state between an ASP.NET application and a WCF service.
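For reference, that switch is WCF's ASP.NET compatibility mode. A minimal sketch of the two pieces involved (the service class and its Ping operation are placeholders):

using System.ServiceModel.Activation;
using System.Web;

// The "single line" goes in web.config, under <system.serviceModel>:
//   <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]
public class MyWCFService : IMyWCFService
{
    public void Ping()
    {
        // With compatibility mode on and the service hosted in IIS,
        // HttpContext.Current (and thus Application/Session state) is available here.
        var application = HttpContext.Current.Application;
    }
}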