I have a requirement to build a Web API for an existing system. There are various agencies throughout the country, and each agency has its own database. All databases are on one single server, all are identical in structure, and each has its own username and password. An agency has one or more users, and a user can belong to one or more agencies. There is also one special database which contains a table of all users, a table of all agencies, and a user-agencies bridge table.
Currently they are using a traditional Windows desktop application. When a user sets up this Windows program, they log in with a username and password. The system then displays for them a list of all the agencies that they belong to (normally just one, but some "power users" can belong to a few). They pick an agency, and then the program connects to the correct database. For the remainder of the session, everything that the user does will be done on that database.
The client wants to create a web app to eventually replace the Windows program (the two will be running side by side for a while). One developer is creating the front end in Angular 5, and I am developing the API in ASP.NET Core 2.1.
So the web app will function in a similar manner to the Windows app. A user logs in to the web app. The web app, which consumes my Web API, tells the API which user just logged in. The API then checks which agency (or agencies) this user belongs to, using the special database that stores that data, and returns the list of agencies to the web app. There, the user picks an agency. From this point on, the web app will include this Agency ID in the header of all API calls. The API, when it receives a request from the web app, will know which database to use based on the Agency ID in the request header.
Hope that makes sense...
Obviously this means that I will have to change the connection string of the DbContext on the fly, depending on which database the API must talk to. I've been looking at this, firstly by doing it in the controller itself, which worked but would involve a lot of copy-and-paste anti-patterns across all my controllers. So I am trying to move this into the DbContext's OnConfiguring() method. I was thinking it'd be best to create a DbContext factory to create the DbContexts using the appropriate connection string. I'm just a bit lost, though. You see, when the web app calls an endpoint on the Web API (let's say an HTTP GET request to get a list of accounts), this fires the HttpGet handler in the Accounts controller. This action method then reads the Agency ID header. But this is all happening in the controller... If I call the DbContext factory from the DbContext's OnConfiguring() method, something would have to send the Agency ID (which was read in the controller) to the factory so that the factory knows which connection string to create. I'm trying not to use global variables, to keep my classes loosely coupled.
Unless I have some service running in the pipeline that intercepts all requests, reads the Agency ID header, and somehow gets it injected into the DbContext's constructor? I have no idea how I would go about doing this...
In summary, I'm a bit lost. I'm not even sure if this is the correct approach. I've looked at some "multi-tenant" examples, but to be honest, I've found them a bit hard to understand, and I was hoping I could do something a bit simpler for now; with time, as my knowledge of .NET Core improves, I can look at improving the code correspondingly.
I am working on something similar to what you describe here. As I am also at quite an early stage, I have no silver bullet yet. There is one thing where I could help you with your approach though:
firstly by doing it on the controller itself, which worked but would involve a lot of copy-and-paste anti-patterns in all my controllers.
I took the approach of having a middleware in charge of swapping the db connection string. Something like this:
public class TenantIdentifier
{
    private readonly RequestDelegate _next;

    public TenantIdentifier(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext httpContext, GlobalDbContext dbContext)
    {
        // Read the tenant GUID from the custom header, look the tenant up in
        // the global database, and make it available to the rest of the pipeline.
        var tenantGuid = httpContext.Request.Headers["X-Tenant-Guid"].FirstOrDefault();
        if (!string.IsNullOrEmpty(tenantGuid))
        {
            var tenant = dbContext.Tenants.FirstOrDefault(t => t.Guid.ToString() == tenantGuid);
            httpContext.Items["TENANT"] = tenant;
        }
        await _next.Invoke(httpContext);
    }
}
public static class TenantIdentifierExtension
{
    public static IApplicationBuilder UseTenantIdentifier(this IApplicationBuilder app)
    {
        app.UseMiddleware<TenantIdentifier>();
        return app;
    }
}
Here I am using a self-created HTTP header called X-Tenant-Guid to identify the tenant's GUID. Then I query the global database to get the connection string for this tenant's db.
I made the example public here: https://github.com/riscie/ASP.NET-Core-Multi-Tenant-multi-db-Example (it's not yet updated to ASP.NET Core 2.1, but it should not be a problem to do so quickly).
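To tie this back to your DbContext question: once the middleware has stashed the tenant in HttpContext.Items, a scoped registration can build each request's DbContext from that tenant's connection string, so no controller ever reads the header itself. Here is a minimal sketch, assuming the Tenant entity exposes a ConnectionString property and an AgencyDbContext (taking DbContextOptions in its constructor) exists for the per-agency databases; both names are my assumptions, not from the example repo:

// Startup.cs (ASP.NET Core 2.1)
public void ConfigureServices(IServiceCollection services)
{
    services.AddHttpContextAccessor();

    // Build a tenant-specific DbContext once per request, using the
    // Tenant the middleware stored in HttpContext.Items.
    services.AddScoped(provider =>
    {
        var httpContext = provider.GetRequiredService<IHttpContextAccessor>().HttpContext;
        var tenant = (Tenant)httpContext.Items["TENANT"];
        // In a real app, handle a missing tenant (e.g. reject the request earlier).

        var options = new DbContextOptionsBuilder<AgencyDbContext>()
            .UseSqlServer(tenant.ConnectionString)
            .Options;
        return new AgencyDbContext(options);
    });
}

public void Configure(IApplicationBuilder app)
{
    app.UseTenantIdentifier(); // must run before MVC so the tenant is resolved first
    app.UseMvc();
}

Controllers can then simply take AgencyDbContext as a constructor parameter and stay completely tenant-agnostic.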
I'm working on a requirement to change an existing ASP.NET MVC application to become multi-tenant ready. The application was built "for only one customer"; in other words, for each client there's a new installation of the MVC app. The application's database structure is prepared to host multiple websites inside one MVC app, so all the database queries already take the "site" into consideration (siteId).
I have several questions regarding multi-tenancy applications and I'm still studying the topic. Today I started making changes to the MVC app and came across one thing. The application has a table with several configurations, things like AppSMTPServer, AppShowLoginBox, etc. These are parameters created to make the app dynamic.
All these configurations are currently stored in the ApplicationState inside a static class, something like this:
public static IDictionary<String, String> Configurations
{
    get
    {
        if (HttpContext.Current.Application[CONFIGURATIONS] == null)
        {
            LoadConfiguration();
        }
        return (IDictionary<String, String>)HttpContext.Current.Application[CONFIGURATIONS];
    }
    private set
    {
        HttpContext.Current.Application[CONFIGURATIONS] = value;
    }
}
My question is: if I change the MVC app to become multi-tenant ready, each tenant will have its own configuration values, so I cannot store them in the ApplicationState anymore, as it is populated on Application_Start and stays there for good.
What are the options for storing tenant-specific configuration data? I looked on several sites and couldn't find any "good practices" on this. If I missed something that would help, please leave a comment. Thanks!
In my experience building multi-tenant apps, this use case can be handled as follows:
The data remains in the db.
Upon a tenant login, when we require their config values, we fetch them from the db store and add them to a cache (Redis, as a distributed cache).
Similarly, we cache on each tenant hit. This way, as the application is used repeatedly, more of the static data ends up in the cache, the load on the app and the db goes down, and responses get faster.
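A rough sketch of that cache-aside flow, using IDistributedCache (which the Redis provider implements). The ITenantConfigStore abstraction and the cache-key format are my own inventions for illustration, not part of the answer above:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;
using Newtonsoft.Json;

// Hypothetical db-backed store for the per-tenant configuration table.
public interface ITenantConfigStore
{
    Task<Dictionary<string, string>> LoadConfigurationsAsync(int tenantId);
}

public class TenantConfigurationCache
{
    private readonly IDistributedCache _cache;
    private readonly ITenantConfigStore _store;

    public TenantConfigurationCache(IDistributedCache cache, ITenantConfigStore store)
    {
        _cache = cache;
        _store = store;
    }

    public async Task<IDictionary<string, string>> GetConfigurationsAsync(int tenantId)
    {
        var key = $"tenant:{tenantId}:config";

        // Cache-aside: try the distributed cache first...
        var cached = await _cache.GetStringAsync(key);
        if (cached != null)
            return JsonConvert.DeserializeObject<Dictionary<string, string>>(cached);

        // ...and on a miss, load from the database and prime the cache.
        var configs = await _store.LoadConfigurationsAsync(tenantId);
        await _cache.SetStringAsync(key, JsonConvert.SerializeObject(configs),
            new DistributedCacheEntryOptions { SlidingExpiration = TimeSpan.FromHours(1) });
        return configs;
    }
}

Each tenant's settings are then fetched from the database at most once per expiry window, which is exactly the "more static data goes into the cache" effect described above.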
So I have a fairly comprehensive activity-based access control system I built for a web app under MVC 4 using Entity Framework. Well, to be precise the access control doesn't care if it's using EF or not, but the app is.
Anyway, I'm loading the user's permissions on each request right now. I get a reference to my DbContext injected from the IoC container into my ApplicationController, and it overrides OnAuthorization to stuff the user's profile into the HttpContext.Current.Items. Seems to work fairly well, but I can't help but wonder if it's the best way.
My thought was that since the users' permissions don't change often, if ever, the better way would be to load the profile of permissions into the Session instead, and then not change them at all until the user logs out and logs back in (pretty common in desktop OSes anyway). But I'm concerned that if I fetch using the DbContext, the object I get back is a dynamic proxy which holds a reference to the DbContext, and I certainly don't want to keep that around for the whole session.
Thoughts? Is this a good approach, and if so how do I ensure that my DbContext doesn't linger beyond when I really need it?
Invoke .AsNoTracking() on the Set<UserPermission> before you query out. Entities will still be proxied, but will be detached from the DbContext.
var userPermission = dbContext.Set<UserPermission>().AsNoTracking()
    .SingleOrDefault(x => x.UserName == User.Identity.Name);
Thoughts? Is this a good approach?
Putting a dynamically proxied entity in session will break as soon as you load-balance your code across more than one web server. Why? Because of the dynamic proxy class. Server A understands the type DynamicProxies.UserPermission_Guid, because it queried out the entity. However, Servers B through N do not, and therefore cannot deserialize it from the Session. The other servers will dynamically proxy the entity with a different GUID.
That said, you could DTO your data into a POCO object and put that in session instead. Then you do not need to worry about your entity being attached to the context when you first query it out; AsNoTracking will only make the query perform a bit faster.
// you can still call .AsNoTracking for performance reasons
var userPermissionEntity = dbContext.Set<UserPermission>().AsNoTracking()
    .SingleOrDefault(x => x.UserName == User.Identity.Name);

// this can safely be put into session and restored by any server with a
// reference to the DLL where the DTO class is defined.
var userPermissionSession = new UserPermissionInSession
{
    UserName = userPermissionEntity.UserName,
    // etc.
};
Thoughts? Is this a good approach?
Another problem with this approach arises when you use the common pattern of creating one DbContext per HTTP request. This pattern typically disposes the DbContext when the request ends:
protected virtual void Application_EndRequest(object sender, EventArgs e)
But what happens when we try to access a navigation property of a proxy entity that references a disposed DbContext?
We get an ObjectDisposedException.
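To make that failure concrete, here is a small sketch (the Roles navigation property is invented for illustration):

// Request 1: query a proxied entity and stash it in session.
var permission = dbContext.Set<UserPermission>()
    .SingleOrDefault(x => x.UserName == User.Identity.Name);
Session["PERMISSION"] = permission;

// ... the request ends and Application_EndRequest disposes dbContext ...

// Request 2: touch a lazy-loaded navigation property on the stale proxy.
var stale = (UserPermission)Session["PERMISSION"];
var roles = stale.Roles; // the proxy tries to lazy-load through the
                         // disposed context -> ObjectDisposedException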
I have a domain service called OrderService, with a saveOrder() method:
class OrderService
{
    // ...

    public function saveOrder(Order $order)
    {
        $this->orderRepository->add($order);
        // $this->entityManager->flush();
        $this->notificationService->notifyOrderPlaced($order);
    }
}
saveOrder() adds the order to the repository (which internally calls persist() on the EntityManager), then passes the Order to the NotificationService to send appropriate notifications (email, SMS).
The problem is, while NotificationService needs the order ID to include in the notifications, the Order has no ID yet as it's not been persisted to the DB (the ID is auto generated).
The obvious solution seems to be to pass the EntityManager as a dependency to the OrderService and flush() right after the repository's add() method, as in the example above. But I've always been reluctant to make domain services aware of the EntityManager, preferring to let them talk only to repositories or other services.
What are the drawbacks, if any, of a domain Service having a dependency on the EntityManager?
Is there a better alternative?
Note: I'm using PHP and the Doctrine ORM, but I believe the same principles apply to Java & Hibernate as well.
You may want to consider one of these options (or both):
Make this service an Application-layer service instead of a Domain service. It's perfectly OK to call your change tracker in an Application service, since it is supposed to know about the application context and progress in the current use case. Typical application services will commit the business transaction and ask the change tracker to save changes when they're done, so why not call it to generate IDs as well?
If you're concerned about the database being involved in the middle of a use case, maybe you can find an equivalent of NHibernate's Guid.Comb strategy to make your ORM generate an ID without issuing an INSERT to the database right away.
Use a Domain event. Upon creation, an Order could inform the world that it has been newed up. The notification service would handle the event and send the appropriate notifications. You'll find an example of that here (it also includes an Application-layer service to take care of the business transaction); a rough sketch follows below.
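Since the question notes the same principles apply beyond PHP, here is that sketch in C# (the predominant language on this page) showing how the two options can combine: an Application-layer service commits the unit of work, then publishes a domain event that carries the now-generated ID. All names below are invented for illustration:

public class Order
{
    public int Id { get; set; } // generated by the ORM when changes are flushed
}

public class OrderPlaced
{
    public OrderPlaced(int orderId) { OrderId = orderId; }
    public int OrderId { get; }
}

public interface IOrderRepository { void Add(Order order); }
public interface IUnitOfWork { void Commit(); }
public interface IEventDispatcher { void Publish(OrderPlaced evt); }

// Application layer, not Domain layer: it may know about the unit of work.
public class PlaceOrderService
{
    private readonly IOrderRepository _orders;
    private readonly IUnitOfWork _unitOfWork;
    private readonly IEventDispatcher _events;

    public PlaceOrderService(IOrderRepository orders, IUnitOfWork unitOfWork,
                             IEventDispatcher events)
    {
        _orders = orders;
        _unitOfWork = unitOfWork;
        _events = events;
    }

    public void PlaceOrder(Order order)
    {
        _orders.Add(order);
        _unitOfWork.Commit(); // the flush happens here; order.Id is now populated
        _events.Publish(new OrderPlaced(order.Id)); // notification handlers react
    }
}

The domain model stays free of EntityManager knowledge; only the application service and the event dispatcher touch infrastructure.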
I am really wanting to get my head around this WCF technology, and it seems the last months of information cramming have somewhat distorted my overall concept of how I should build my client/server application.
I'd appreciate it if someone out there could shed some light on the best practices for developing my app and implementing a duplex WCF service with multiple interfaces.
General outline: I want to develop an app where users connect to a server and, let's say, add contacts to a SQL database. I have discovered many ways of doing this, but I would ultimately like to know I'm heading down the right path when it comes time to develop the app further.
Some models I have discovered are:
The client has its own LINQ to SQL classes and handles all data access itself. Bad: really slow, with overhead from LINQ and the SQL connections, on top of a poor implementation of the LINQ Select command.
Another model was to have the service implement the LINQ to SQL commands used for CRUD operations; however, this still doesn't provide live data updates to the other clients connected to the service.
So I made a basic app where, when a client logs in to the service, its callback channel gets added to the callback list. When a client feeds a new contact to the service, the service invokes a callback on all channel clients with the new contact, and the client-side function takes care of adding the contact in the right spot.
So now I want to implement a User object and perhaps two more business objects, say Project and Item. My idea is to create my service like this:
[Serializable]
[DataContract]
[ServiceBehavior(
    ConcurrencyMode = ConcurrencyMode.Single,
    InstanceContextMode = InstanceContextMode.PerCall)]
public class Project : IProject
{
    [DataMember()]
    public int projectID;

    public int Insert(objSubItem _objSubItem)
    {
        // code here
    }
etc., and
[ServiceContract(
    Name = "Project",
    Namespace = "",
    SessionMode = SessionMode.Required,
    CallbackContract = typeof(IProjectCallback))]
public interface IProject
{
    /// <summary>
    /// Inserting a Project record to the database
    /// </summary>
    /// <param name="_project">Project from Client</param>
    /// <returns>ProjectID back to the client; if -1 then fail</returns>
    [OperationContract()]
    int Insert(Project _project);
and
public interface IProjectCallback
{
    /// <summary>
    /// Notifies the clients that a Project has been added
    /// </summary>
    /// <param name="_project">Inserted Project</param>
    [OperationContract(IsOneWay = true)]
    void NotifyProjectInserted(Project _project);
}
Obviously I have other CRUD functions, as well as functions to ensure that both client- and server-side data records are read-only while being edited.
Now, if I have multiple objects, what is the best way to lay it out?
I'm thinking of creating a Service.cs and an IService.cs, plus an IServiceCallback, to negotiate the client channel population. Should I also use partial classes of the service to implement IProject and IUser, to properly invoke the service callbacks as well as invoking the objects' Insert methods?
Would I do it like this:
[ServiceContract(
    Name = "Service",
    Namespace = "",
    SessionMode = SessionMode.Required,
    CallbackContract = typeof(IServiceCallBack))]
[ServiceKnownType(typeof(Project))]
[ServiceKnownType(typeof(User))]
public interface IService
{
    // code here
}
and also
[ServiceBehavior(
    ConcurrencyMode = ConcurrencyMode.Single,
    InstanceContextMode = InstanceContextMode.PerCall)]
public partial class Service : IUser
{
    public int Insert(User _User)
    {
        // code here
    }
}

public partial class Service : IProject
{
    public int Insert(Project _project)
    {
        // code here
    }
}

public partial class Service : IService
{
    // functions here
}
It feels as though the approach would be right if it were for one interface, but I feel that I need some "best practice" assistance.
Many thanks in advance,
Chris Leach
Hi Richard,
I appreciate your response. As you can see, this is my first post, and my third ever on any forum related to programming. I have lived my programming life very close to Google, as shown by my Google autofill history, but it's time to start asking questions of my own, so I thank you for your assistance so far.

I really want to understand an overall approach to managing data consistency in a distributed client/service application. I am looking into Telerik ORM and also Entity Framework as a solution, exposing the entities through a WCF service, but I lack the understanding to implement data consistency amongst the clients. I have managed to develop a netDualTcp chat application and have used a list of client callback contexts to handle the join/leave and chat functions. I lack the overall picture, however. It seems that I could keep an in-memory (static) version of all of the tables in my SQL database and either have the clients bind directly to these lists, if that is possible, or, as seems best, have my custom user controls handle the connections, so the server is aware of who has a particular user control open and can direct changes to those clients who are registered to the callback contract. That way the clients aren't having to load the entire project every time they wish to open the application.

I am thinking of a multi-purpose application, such as a contact/grant application program, where users will be using different parts of the application and do not always need to access all of the information at one time. When a user first logs in, I am hoping that the service will attach a callback contract for the client, and several bits of information will be loaded back to the client on authentication, such as a basic state (e.g. if they are an admin, they get notifications, etc.). Once they are logged in, they are presented with a blank canvas, and then begin to load custom user controls into a docking-panel-style interface.

I guess this is where I become a little stuck about how best to manage concurrency and consistency whilst minimizing load/data-transfer times to the client and freeing up CPU processing time on the client. I know in programming there are multiple ways of doing this, but I would like to know from the people on this forum what they feel the best approach to this type of solution is. I understand it's a deep topic, but I feel I have come this far and a guiding hand would be appreciated. Thanks again.
Generally I find taking a non-abstract view of a service gets me to the right place. What is it that consumers of my service are going to need to do?
I obviously have internal domain objects that are used by my business layer to create and manipulate the data. However, the way the business layer does things isn't necessarily the best way to partition functionality for my service.
So, for example, if any project should have at least one user in it, then when you create the project you should send over at least one user at the same time. The service operations need to encapsulate all of the data required to carry out a self-contained business transaction.
Similarly, the death knell of many distributed systems is latency: they require lots of round trips to complete something. So, for example, you want to be able to add a user to a project; in reality you probably want to add a number of users to a project. Therefore, you should model the operation to accept a list of users, not a single one that must be invoked multiple times.
So a project service should allow you to do all the things related to a project, or projects, through a service contract. If users can live independently of projects, then also have a user service. If they cannot, then don't have a user service, as everything needs to be project-focused.
Business transactions are often more than straightforward CRUD operations on domain entities, and the service should model them rather than reflecting the data model. A sketch of what that can look like follows.
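As a concrete illustration of that advice, a project-focused contract might batch users and encapsulate a whole business transaction in a single round trip. All the names below are invented, not taken from the question:

using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class ProjectData
{
    [DataMember] public string Name { get; set; }
}

[DataContract]
public class UserData
{
    [DataMember] public string UserName { get; set; }
}

[ServiceContract]
public interface IProjectService
{
    // One call creates the project together with its initial members, so the
    // "a project must have at least one user" rule holds atomically.
    [OperationContract]
    int CreateProject(ProjectData project, List<UserData> initialUsers);

    // Accepts a batch instead of forcing one round trip per user.
    [OperationContract]
    void AddUsersToProject(int projectId, List<UserData> users);
}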
My company has a product that I feel can benefit from a web service API. We are using MSMQ to route messages back and forth through the backend system. Currently we are building an ASP.NET application that communicates with a web service (WCF) that, in turn, talks to MSMQ for us. Later on down the road, we may have other client applications (not necessarily written in .NET). The message going into MSMQ is an object that has a property made up of an array of strings. There is also a property that contains the command (a string) that will be routed through the system. Personally, I am not a huge fan of this, but I was told it is for scalability and every system can use strings.
My thought, regarding the web services was to model some objects based on our data that can be passed into and out of the web services so they are easily consumed by the client. Initially, I was passing the message object, mentioned above, with the array of strings in it. I was finding that I was creating objects on the client to represent that data, making the client responsible for creating those objects. I feel the web service layer should really be handling this. That is how I have always worked with services. I did this so it was easier for me to move data around the client.
It was recommended to our group that we maintain the “single entry point” into the system by offering an object that contains commands and having one web service take care of everything. So, the web service would have one method in it; let’s call it MakeRequest, and it would return an object (either serialized XML or JSON). The suggestion was to have a base object that may contain some sort of list of commands that other objects can inherit from. Any other object may have its own command structure, but still inherit the base commands. What is passed back from the service is not clear right now, but it could be that “message object” with an object attached to it representing the data. I don’t know.
My recommendation was to model our objects after our actual data and create services for the types of data we are working with. We would create a base service interface that would house any common methods used for all services. So for example, GetById, GetByName, GetAll, Save, etc. Anything specific to a given service would be implemented for that specific implementation. So a User service may have a method GetUserByUsernameAndPassword, but since it implements the base interface it would also contain the “base” methods. We would have several methods in a service that would return the type of object expected, based on the service being called. We could house everything in one service, but I still would like to get something back that is more usable. I feel this approach leaves the client out of making decisions about what commands to be passed. When I connect to a User service and call the method GetById(int id) I would expect to get back a User object.
I had the luxury of working with MS when I started developing WCF services. So, I have a good foundation and understanding of the technology, but I am not the one designing it this time.
So, I am not opposed to the “single entry point” idea, but any thoughts about why either approach is more scalable than the other would be appreciated. I have never worked with such a systematic approach to a service layer before. Maybe I need to get over that?
I think there are merits to both approaches.
Typically, if you are writing an API that is going to be consumed by a completely separate group of developers (perhaps in another company), then you want the API to be as self-explanatory and discoverable as possible. Having specific web service methods that return specific objects is much easier to work with from the consumer's perspective.
However, many companies use web services as one of many layers to their applications. In this case, it may reduce maintenance to have a generic API. I've seen some clever mechanisms that require no changes whatsoever to the service in order to add another column to a table that is returned from the database.
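For instance, one way such a mechanism can work (a sketch of the general idea only, not the specific implementation alluded to above) is to return rows as property bags, so a new database column flows through without any contract change:

using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class GenericRecord
{
    // Column name -> value as text. A new column in the database becomes a
    // new dictionary entry; the service contract never changes.
    [DataMember]
    public Dictionary<string, string> Fields { get; set; }
}

[ServiceContract]
public interface IGenericQueryService
{
    [OperationContract]
    List<GenericRecord> Query(string entityName);
}

The cost, of course, is exactly what the next paragraph argues: consumers lose the typed, self-documenting surface.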
My personal preference is for the specific API. I think that the specific methods are much easier to work with - and are largely self-documenting. The specific operation needs to be executed at some point, so why not expose it for what it is? You'd get laughed at if you wrote:
public void MyApiMethod(string operationToPerform, params object[] args)
{
    switch (operationToPerform)
    {
        case "InsertCustomer":
            InsertCustomer(args);
            break;
        case "UpdateCustomer":
            UpdateCustomer(args);
            break;
        ...
        case "Juggle5BallsAtOnce":
            Juggle5BallsAtOnce(args);
            break;
    }
}
So why do that with a Web Service? It'd be much better to have:
public void InsertCustomer(Customer customer)
{
    ...
}

public void UpdateCustomer(Customer customer)
{
    ...
}

...

public void Juggle5BallsAtOnce(bool useApplesAndEatThemConcurrently)
{
    ...
}