Multi-tenancy in Golang - sql

I'm currently writing a service in Go where I need to deal with multiple tenants. I have settled on the single-database, shared-tables approach, using a 'tenant_id' discriminator column for tenant separation.
The service is structured like this:
gRPC server      -> gRPC Handlers ---\
                                      \_ Managers (SQL)
HTTP/JSON server -> Handlers --------/
Two servers, one gRPC (administration) and one HTTP/JSON (public API), each running in its own goroutine and with its own respective handlers that can make use of the functionality of the different managers. The managers (let's call one 'inventory-manager') all live in different root-level packages. These are, as far as I understand it, my domain entities.
In this regard I have some questions:
I cannot find any ORM for Go that supports multiple tenants out there. Is writing my own on top of perhaps the sqlx package a valid option?
Other services in the future will require multi-tenant support too, so I guess I would have to create some library/package anyway.
Today, I resolve the tenants by using a ResolveTenantBySubdomain middleware for the public API server. I then place the resolved tenant ID in a context value that is sent with the call to the manager. Inside the different methods in the manager, I get the tenant ID from the context value. This is then used with every SQL query/exec call, or an error is returned if the tenant ID is missing or invalid. Should I even use context for this purpose?
Resolving the tenant on the gRPC server, I believe I have to use the UnaryInterceptor function for middleware handling. Since the gRPC API will only be accessed by other backend services, I guess resolving by subdomain is unnecessary here. But how should I embed the tenant ID? In the header?
Really hope I'm asking the right questions.
Regards, Karl.

I cannot find any ORM for Go that supports multiple tenants out there. Is writing my own on top of perhaps the sqlx package a valid option?
ORMs in Go are a controversial topic! Some Go users love them, others hate them and prefer to write SQL by hand. This is a matter of personal preference. Asking for specific library recommendations is off-topic here, and in any event, I don't know of any multi-tenant ORM libraries – but there's nothing to prevent you from writing a wrapper around the sqlx package (I work daily on a system which does exactly this).
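As a minimal sketch of what such a wrapper might look like (the tenantsql package and TenantDB type here are hypothetical, invented for illustration, not an existing library):

package tenantsql

import (
    "errors"

    "github.com/jmoiron/sqlx"
)

// TenantDB wraps an sqlx.DB and forces every query to carry the tenant ID
// as its first argument.
type TenantDB struct {
    db       *sqlx.DB
    tenantID string
}

var ErrNoTenant = errors.New("tenantsql: missing tenant ID")

func ForTenant(db *sqlx.DB, tenantID string) *TenantDB {
    return &TenantDB{db: db, tenantID: tenantID}
}

// Select expects queries written with a leading tenant_id predicate, e.g.
// "SELECT * FROM items WHERE tenant_id = $1 AND sku = $2", and prepends
// the tenant ID to the remaining arguments.
func (t *TenantDB) Select(dest interface{}, query string, args ...interface{}) error {
    if t.tenantID == "" {
        return ErrNoTenant
    }
    return t.db.Select(dest, query, append([]interface{}{t.tenantID}, args...)...)
}

Future services could then depend on this package instead of re-implementing the tenant scoping each time.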
Other services in the future will require multi-tenant support too, so I guess I would have to create some library/package anyway.
It would make sense to abstract this behavior from those internal services in a way which suits your programming and interface schemas, but there are no further details here to answer more concretely.
Today, I resolve the tenants by using a ResolveTenantBySubdomain middleware for the public API server. I then place the resolved tenant ID in a context value that is sent with the call to the manager. Inside the different methods in the manager, I get the tenant ID from the context value. This is then used with every SQL query/exec call, or an error is returned if the tenant ID is missing or invalid. Should I even use context for this purpose?
context.Context is mostly about cancellation, not request propagation. While your use is acceptable according to the documentation for the WithValue function, it's widely considered a bad code smell to use the context package as currently implemented to pass values. Rather than use implicit behavior, which lacks type safety and many other properties, why not be explicit in the function signature of your downstream data layers by passing the tenant ID to the relevant function calls?
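For example, a manager method could take the tenant ID explicitly. In this sketch, InventoryManager, Item and queryItems are invented names standing in for your actual types:

package inventory

import (
    "context"
    "errors"
)

type Item struct{ SKU string }

type InventoryManager struct{}

type ctxKey int

const tenantKey ctxKey = 0

// Implicit style: the tenant ID rides along in the context, and a missing
// value is only caught at runtime.
func (m *InventoryManager) ListItemsFromContext(ctx context.Context) ([]Item, error) {
    tenantID, ok := ctx.Value(tenantKey).(string)
    if !ok || tenantID == "" {
        return nil, errors.New("missing or invalid tenant ID in context")
    }
    return m.queryItems(tenantID)
}

// Explicit style: the tenant ID is part of the signature, so the compiler
// forces every caller to supply one.
func (m *InventoryManager) ListItems(ctx context.Context, tenantID string) ([]Item, error) {
    if tenantID == "" {
        return nil, errors.New("missing tenant ID")
    }
    return m.queryItems(tenantID)
}

func (m *InventoryManager) queryItems(tenantID string) ([]Item, error) {
    return nil, nil // SQL elided
}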
Resolving the tenant on the gRPC server, I believe I have to use the UnaryInterceptor function for middleware handling. Since the gRPC API will only be accessed by other backend services, I guess resolving by subdomain is unnecessary here. But how should I embed the tenant ID? In the header?
The gRPC library is not opinionated about your design choice. You can use a header value (to pass the tenant ID as an "ambient" parameter to the request) or explicitly add a tenant ID parameter to each remote method invocation which requires it.
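A sketch of both sides using the grpc-go metadata package (the "x-tenant-id" key is an arbitrary choice of ours, not a convention of the library):

package middleware

import (
    "context"

    "google.golang.org/grpc"
    "google.golang.org/grpc/codes"
    "google.golang.org/grpc/metadata"
    "google.golang.org/grpc/status"
)

// Client side: attach the tenant ID as outgoing metadata (a gRPC "header").
func callWithTenant(ctx context.Context, tenantID string) context.Context {
    return metadata.AppendToOutgoingContext(ctx, "x-tenant-id", tenantID)
}

// Server side: a UnaryInterceptor that rejects requests lacking a tenant ID.
func tenantInterceptor(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
    md, ok := metadata.FromIncomingContext(ctx)
    if !ok || len(md.Get("x-tenant-id")) == 0 {
        return nil, status.Error(codes.Unauthenticated, "missing tenant ID")
    }
    // From here, pass md.Get("x-tenant-id")[0] explicitly into your managers.
    return handler(ctx, req)
}

The interceptor is installed once, with grpc.NewServer(grpc.UnaryInterceptor(tenantInterceptor)).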
Note that passing a tenant ID between your services in this way creates external trust between them – if service A makes a request of service B and annotates it with a tenant ID, you assume service A has performed the necessary access control checks to verify a user of that tenant is indeed making the request. There is nothing in this simple model to prevent a rogue service C asking service B for information about some arbitrary tenant ID. An alternative implementation would implement a more complex trust-nobody policy whereby each service is provided with sufficient access control information to make its own policy decision as to whether a particular request scoped to a particular tenant should be fulfilled.

Related

Authentication Providers, design pattern to adopt

I have a Windows Phone 8.1 client. This client connects to a Web API (ASP.NET) and fetches the supported authentication providers. At the moment it's Google and Twitter. The user (WP 8.1) can select which provider he wants to use for authentication.
Based on the provider selected on the phone, the underlying authentication flow is different; in other words, Google has one flow and Twitter has another. Because of this I have switch statements in my client that look like the following:
switch (authProvider)
{
    case "Google":
        GoogleAuthProvider.PerformAuthentication();
        break;
    case "Twitter":
        TwitterAuthProvider.PerformAuthentication();
        break;
}
My main problem with this is that I am now hard-coding the provider. The rest of my phone app uses IoC (MVVMLight), and in this case I am hard-coding. How do I get rid of this without explicitly referring to the container? Plus, let's say at a later point in time an additional auth provider is supported; based on the current implementation I would need to modify the client code as well. How do I minimize this?
From the example you provided, the GoF State pattern is up to the task, assuming that the authentication interface is uniform (it consists of one method – PerformAuthentication – and potentially other methods that are common across all the possible providers). You would create the interface IAuthenticationProvider and inject an implementation of it into the logic that actually gets executed (the logic that previously contained the switch).
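A minimal sketch of that interface and its injection (all class names are illustrative):

public interface IAuthenticationProvider
{
    void PerformAuthentication();
}

public class GoogleAuthProvider : IAuthenticationProvider
{
    public void PerformAuthentication() { /* Google-specific flow */ }
}

public class TwitterAuthProvider : IAuthenticationProvider
{
    public void PerformAuthentication() { /* Twitter-specific flow */ }
}

// The consumer receives a provider from the container instead of switching
// on a string; adding a provider means adding a class and a registration,
// not modifying client code.
public class LoginViewModel
{
    private readonly IAuthenticationProvider _provider;

    public LoginViewModel(IAuthenticationProvider provider)
    {
        _provider = provider;
    }

    public void Login()
    {
        _provider.PerformAuthentication();
    }
}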
In fact it is very similar to a Strategy injected via DI, but Strategy just encapsulates the algorithm, whereas State is more powerful and better suited to the domain of authentication (it might have …state and other methods/properties – not that bad, right? :)
If you face providers with different functional capabilities and interfaces, you might want to choose the Bridge pattern, which unites the heterogeneous authentication interfaces under the umbrella of a single interface. But it seems to me that using Bridge would be overengineering here.

Is shared assembly the only way to create objects from WCF REST service

I am writing an application that is consuming an in-house WCF-based REST service and I'll admit to being a REST newbie. Since I can't use the "Add Service Reference", I don't have ready-made proxy objects representing the return types from the service methods. So far the only way I've been able to work with the service is by sharing the assembly containing the data types exposed by the service.
My problem with this arrangement is that I see only two possibilities:
Implement DTOs (DataContracts) and expose those types from my service. I would still have to share an assembly, but this approach would limit the types in the shared assembly to the service contract and DTOs. I don't like using DTOs just for the sake of it, though, as they add another layer of abstraction and the processing time to convert from domain object to DTO and vice versa. Plus, if I want business rules, validation, etc. on the client, I'd have to share the domain objects anyway – so is the added complexity necessary?
Support serialization of my domain objects, expose those types and share that assembly. This would allow me to share business and validation logic with the client but it also exposes parts of my domain objects to the client that are meant only for the service app.
Perhaps an example would help the discussion...
My client application will display a list of documents that is obtained from the REST service (a GET operation). The service returns an array of DocumentInfo objects (lightweight, read-only representation of a Document).
When the user selects one of the items, the client retrieves the full Document object from the REST service (GET by id) and displays a data entry form so the user can modify the object. We would want validation rules for a rich user experience.
When the user commits the changes, the Document object is submitted to the REST service (a PUT operation) where it is persisted to the back-end data store.
If the state of the Document allows, the user may "Publish" the Document. In this case, the client POSTs a request to the REST service with the Document.ID value and the service performs the operation by retrieving the server-side Document domain object and calling the Publish method. The Publish method should not be available to the client application.
As I see it, my Document and DocumentInfo objects would have to be in a shared assembly. Doing this makes Document.Publish available to the client. One idea to hide it would be to make the method internal and add an InternalsVisibleTo attribute that allows my service app to call the method but not the client, but this seems "smelly."
Am I on the right track or completely missing something?
The classes you use on the server should not be the same classes you use on the client (apart from during the data transfer itself). The best approach is to create a package (assembly/project) containing DTOs and share it between the server and the client. You did mention that you don't want to create DTOs for the sake of it, but it is best practice. The performance impact of adding extra layers is negligible, and layering actually helps make your application easier to develop and maintain (avoiding situations like yours, where the client has access to server code).
I suggest starting with the following packages:
Service: Resides on server only, exposes the service and contains server application logic.
DTO: Resides on both server and client. Contains simple classes holding the data that needs to be passed between server and client. The classes have no code apart from properties; they are short-lived objects that survive only long enough to transfer the data.
Repository: Resides on client only. Calls the server and turns Model objects into DTOs (and vice versa).
Model: Resides on client only. Contains classes which represent business objects and relationships. Model objects stay in memory throughout the life of the application.
Your client application code should call into the Repository to get Model objects (you might also consider looking into MVVM if you're not sure how to go about this).
If your service code is sufficiently complex that it needs access to Model classes, you should create a separate Model package (obviously give it a different name) - the only classes which should exist both on server and client are DTO classes.
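Putting those packages together for the Document example in the question, a compact sketch might look like this (member names are illustrative):

// DTO package – shared between server and client; data only, no behavior.
public class DocumentDto
{
    public int Id { get; set; }
    public string Title { get; set; }
}

// Model package – client only; business rules and validation live here.
public class Document
{
    public int Id { get; private set; }
    public string Title { get; set; }

    public Document(int id, string title)
    {
        Id = id;
        Title = title;
    }

    public bool IsValid()
    {
        return !string.IsNullOrEmpty(Title);
    }
}

// Repository package – client only; calls the service and maps DTO <-> Model.
// Note there is no Publish method anywhere the client can see; that behavior
// stays on the server.
public class DocumentRepository
{
    public Document Get(int id)
    {
        DocumentDto dto = FetchFromService(id); // HTTP GET, elided
        return new Document(dto.Id, dto.Title);
    }

    private DocumentDto FetchFromService(int id)
    {
        // Service call omitted for brevity.
        return new DocumentDto { Id = id, Title = string.Empty };
    }
}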
I thought that I'd post the approach I took while giving credit to both Greg and Jake for helping guide me down the path.
While Jake is correct that deserializing the data on the client can be done with any type as long as it implements the same data contract, enforcing this without WSDL can be a bit tricky. I'm in an environment where other developers will be working with my solution, both to support and maintain the existing code and to create new clients that consume my service. They are used to doing "Add Service Reference" and going.
Greg's points about using different objects on the client and the server were the most helpful. I was trying to minimize duplication by sharing my domain layer between the client and the server, and that was the root of my confusion. As soon as I separated these into two distinct applications and looked at them in isolation, each with their own use cases, the picture became clearer.
As a result, I am now sharing a Contracts assembly which contains my service contracts so that a client can easily create a channel to the server (using WCF on the client-side) and data contracts representing the DTOs passed between client and service.
On the client, I have ViewModel objects which wrap the Model objects (data contracts) for the UI and use a service agent class to communicate with the service using the service contracts from the shared assembly. So when the user clicks the "Publish" button in the UI, the controller (or command in WPF/SL) calls the Publish method on the service agent passing in the ID of the document to publish. The service agent relays the request to the REST API (Publish operation).
On the server, the REST API is implemented using the same service contracts. In this case, the service works with my domain services, repositories and domain objects to carry out the tasks. So when the Publish service operation is invoked, the service retrieves the Document domain object from the DocumentRepository, calls the Publish method on the object which updates the internal state of the object and then the service passes the updated object to the Update method of the repository to persist the changes.
I am pleased with the outcome as I believe this gives me a more robust and extensible architecture to work with. I can change the ViewModels as needed to support the UI with no concern over polluting the service(s) and, likewise, change the internal implementation of the service operations (domain layer) without affecting the client application(s). All that binds the two are the contracts they share. Pretty clean.
You can serialize your domain objects and then de-serialize them into different types on the client. Both types need to implement the same data contract. All serializable types have at least a default data contract that includes all public read/write properties and fields.
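For instance, two differently named classes deserialize interchangeably as long as their data contracts match (the namespace and member names below are illustrative):

using System.Runtime.Serialization;

// Server-side domain type: carries behavior that must not reach the client.
[DataContract(Name = "Document", Namespace = "urn:example/data")]
public class Document
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Title { get; set; }

    public void Publish() { /* server-only; never serialized */ }
}

// Client-side type: a different class with the same contract Name/Namespace
// and matching [DataMember] members, so the serializer accepts it.
[DataContract(Name = "Document", Namespace = "urn:example/data")]
public class DocumentModel
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Title { get; set; }
}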

Secure WCF Services using WIF/STS - decorate methods with required claims?

I am looking at securing some WCF services using WIF, and have read in the Identity Training Kit from Microsoft, in exercise 1: "Furthermore, you can expect developers to assign conditions via Code Access Security style calls (i.e. decorating via attributes and so on). Both capabilities will require some coding support"
(midway through this article:
http://channel9.msdn.com/Learn/Courses/IdentityTrainingCourse/WebServicesAndIdentity/WebServicesAndIdentityLab/Exercise-1-Using-Windows-Identity-Foundation-to-Handle-Authentication-and-Authorization-in-a-WCF-Ser
)
However I'm unable to find any documentation regarding how to implement a solution that makes use of this decoration approach. I don't really have any need for using the claims within the actual WCF method or business logic, but simply want to use WIF/STS to secure access to the method. Any tips on whether this is the best approach, and how to secure methods using decorations would be appreciated.
I think you can take a look at PostSharp. You can implement your cross-cutting concerns using AOP and then apply them as attributes to decorate your methods. Your checks would then be isolated in well-known places, and the business methods would specify, via the security attributes, the claims required to execute them.
Or, for simple cases, you can use this (I think you were referring to these):
[ClaimsPrincipalPermission(SecurityAction.Demand, Operation = "Operation1", Resource = "Resource1")]
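Applied to a service method, that attribute looks like the following sketch (DocumentService/IDocumentService and the operation/resource values are placeholders):

public class DocumentService : IDocumentService
{
    // WIF checks the caller's principal for a claim authorizing "Operation1"
    // on "Resource1" before the body runs; failure raises a SecurityException.
    [ClaimsPrincipalPermission(SecurityAction.Demand, Operation = "Operation1", Resource = "Resource1")]
    public void DoWork()
    {
        // Business logic only – no imperative authorization code needed here.
    }
}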
You can also implement an IOperationInvoker. Attribute your contract and implement it with a behavior. Spin through the channels and endpoints at startup, reflecting on your operations for attributes on the methods and/or parameters to set up your checks. Then apply the checks when the operation is invoked.
There are a couple of good articles around, though I can only find the one below:
http://msdn.microsoft.com/en-us/magazine/cc163302.aspx

WCF Business logic handling

I have a WCF service that supports about 10 contracts. We have been supporting one client, with all the business rules specific to that client. Now we have another client who will be using the exact same contracts (so we cannot change those) and will call the service exactly the same way the previous client did. The only way we can differentiate between the two clients is by one of the input parameters. Based on this input parameter, we have to use slightly different business logic – the logic for both clients will be the same about 50% of the time; the remainder has different logic (across the business/DAL layers). I don't want to use if-else statements in each contract implementation to differentiate and reroute the logic – and what if another client comes in? Is there a clean way of handling a situation like this? I am using framework 3.5. Like I said, I cannot change any of the contracts (service/data contracts) or the current service-calling infrastructure for the new client. Thanks
Can you possibly host the services twice and have the clients connect to the right one? Apart from that, you have to use some kind of if-else, I guess.
I can't say whether this is applicable to you, but we have solved a similar problem along this path:
We add header information to the message that states in which context some logic is being called.
This information ends up in a RequestContext class.
We delegate responsibility for instantiating the implementation of the contract to a DI container (in our case StructureMap).
We have defined a strategy for how certain components are to be provided by the container:
There is a default for a component of some kind.
Attributes can be placed on specializations to denote for which type of request context a specialization should be used.
This is registered into the container through the available mechanisms.
We make a call to the container by stating ObjectFactory.With(requestContext).GetInstance<CONTRACT>().
Dependencies of the service implementations are now resolved in a way that applies the described process. That is, specializations are provided based ultimately on the request information placed in the header.
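A condensed sketch of that flow (RequestContext and IPricingLogic are our own names; the With(...).GetInstance<T>() call uses StructureMap's older ObjectFactory API, so check it against the StructureMap version you use):

// Inside a contract implementation: read the discriminating header that the
// client added to the message, and build the request context from it.
string clientId = OperationContext.Current.IncomingMessageHeaders
    .GetHeader<string>("ClientId", "urn:myservice/headers");
var requestContext = new RequestContext(clientId);

// Ask the container for the component; specializations attributed for this
// request context win, otherwise the registered default is returned.
var logic = ObjectFactory.With(requestContext).GetInstance<IPricingLogic>();
logic.Execute();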
This is one example of how this might be solved.

MVVM on top of claims aware web services

I'm looking for some input for a challenge that I'm currently facing.
I have built a custom WIF STS which I use to identify users who want to call some WCF services that my system offers. The WCF services use a custom authorization manager that determines whether or not the caller has the required claims to invoke a given service.
Now, I'm building a WPF app on top of those WCF services. I'm using the MVVM pattern, such that the view model invokes the protected WCF services (which implement the model). The challenge that I'm facing is that I do not know whether or not the current user can successfully invoke the web service methods without actually invoking them. Basically, what I want to achieve is to enable/disable certain parts of the UI based on the ability to successfully invoke a method.
The best solution that I have come up with so far is to create a service which, based on the same business logic as the custom authorization policy manager, will be able to determine whether or not a user can invoke a given method. The method would have to be passed to this service as a string – or actually two strings, ServiceAddress and Method (Action) – and based on that input, the service would be able to determine if the current user has the required claims to access the method. Obviously, for this to work, this service would itself have to require an issued token from the same STS, with the same claims, in order to do its job.
Have any of you done something similar in the past, or do you have any good ideas on how to do this?
Thanks in advance,
Klaus
This depends a bit on what claims you're requiring in your services.
If your services all require the same set of claims, I would recommend making a service that does nothing but check the claims, and calling it in advance. This lets you "pre-authorize" the user, in turn enabling/disabling the appropriate portions of the UI. When it comes time to call your actual services, the user can just call them at will, and you've already checked that it's safe.
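A sketch of such a pre-authorization contract (the interface is invented for illustration):

using System.ServiceModel;

[ServiceContract]
public interface IAuthorizationProbe
{
    // Protected by the same claims the real services demand; if this call
    // succeeds, the UI can enable the corresponding commands up front.
    [OperationContract]
    bool CheckAccess();
}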
If the services all require different sets of claims, and there is no easy way to verify that they will work in advance, I would just let the user call them, and handle this via normal exception handling. This is going to make life a bit trickier, though, since you'll have to let the user try (and fail) then disable.
Otherwise, you can do something like what you suggested – put in some form of catalog you can query for a specific user. In addition to passing an address/method pair, it might be nicer to pass just an address and retrieve the entire set of allowed (or disallowed, whichever is smaller) methods. This way you could reduce the round trips needed just for these checks.
An approach that I have taken is a class that does the inspection of a ClaimSet to guard the methods behind the service. I use attributes to decorate the methods with type, resource and right property values. Then the inspection class has a Demand method that throws an exception if the caller's ClaimSet does not contain a Claim with those property values. So before any method code executes, the claim inspection demand is called first. If the method is still executing after the demand, then the caller is good. There is also a bool function in the inspection class to answer the same question (does the caller have the appropriate claims) without throwing an exception.
I then package the inspection class so that it is deployed with clients, and as long as the client can also get the caller's ClaimSet (which I provide via a GetClaimSet method on the service), it has everything it needs to make the same evaluations that the domain model is making. I then use the bool method of the claim inspection class in the CanExecute method of ICommand properties in my view models to enable/disable controls, which basically keeps users from getting authorization exceptions by not letting them do things they don't have the claims for.
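A rough sketch of those pieces – the throwing Demand, the non-throwing bool check, and its use in CanExecute (all type and member names here are illustrative, not the poster's actual code):

using System;
using System.IdentityModel.Claims;
using System.Security;

public class ClaimInspector
{
    // Called first in every service method; throws if the claim is absent.
    public void Demand(ClaimSet callerClaims, string claimType, object resource, string right)
    {
        if (!HasClaim(callerClaims, claimType, resource, right))
            throw new SecurityException("Caller lacks the required claim.");
    }

    // Non-throwing variant, shared with the client to drive the UI.
    public bool HasClaim(ClaimSet callerClaims, string claimType, object resource, string right)
    {
        foreach (Claim claim in callerClaims)
        {
            if (claim.ClaimType == claimType
                && Equals(claim.Resource, resource)
                && claim.Right == right)
                return true;
        }
        return false;
    }
}

// In a view model (MVVM Light's RelayCommand shown as one option), the same
// bool check gates the command:
//   bool canAdd = inspector.HasClaim(callerClaims, "urn:example/claims/action", "Apple", "Add");
//   AddAppleCommand = new RelayCommand(AddApple, () => canAdd);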
As far as how the client knows what claims are required for which methods, I guess I leave that up to the client developer to just know. In general on my projects this isn't a big problem, because the methods have been classic CRUD. So if the method is to add an Apple, then the claim required is intuitively going to be Type = Apple, Right = Add.
Not sure if this helps your situation but it has worked pretty well on some projects I have done.