Authorization in Redis: ACL

Redis supports ACLs since v6. How can we achieve authorization at the key-pattern level? We want to implement a system in which multiple services each have their own key pattern, and no service should be able to read another service's data.
For example:

Service Name    Keys Pattern
Service A       Service_A_::_
Service B       Service_B_::_

so that service A can't read service B's data and vice versa.

Design the key as
{namespace}:{object type}:{identifier}:{optional name}.
Example :
public:users:{1234}:purchase
Key-pattern restrictions can be done using the ~<pattern> rule.
For example, ~public:* when used with ACL SETUSER will allow the user to access keys in the public namespace. More information is available at https://redis.io/topics/acl
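A rough sketch of how this could look for the two services above, assuming ACL users named service_a and service_b, keys prefixed Service_A:: and Service_B::, and placeholder passwords and +@read +@write permissions to be adjusted:

ACL SETUSER service_a on >service-a-password ~Service_A::* +@read +@write
ACL SETUSER service_b on >service-b-password ~Service_B::* +@read +@write

With these rules each service authenticates as its own ACL user, and any command service_a issues against a key matching Service_B::* is rejected by Redis with a NOPERM error.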

Related

Multi-tenancy in Golang

I'm currently writing a service in Go where I need to deal with multiple tenants. I have settled on using the one-database, shared-tables approach, using a 'tenant_id' discriminator column for tenant separation.
The service is structured like this:
gRPC server -> gRPC Handlers -----\
                                   \_ Managers (SQL)
                                   /
HTTP/JSON server -> Handlers -----/
Two servers, one gRPC (administration) and one HTTP/JSON (public API), each running in its own goroutine and with its own respective handlers that can make use of the functionality of the different managers. The managers (let's call one 'inventory-manager') all live in different root-level packages. These are, as far as I understand it, my domain entities.
In this regard I have some questions:
I cannot find any ORM for Go that supports multiple tenants out there. Is writing my own on top of perhaps the sqlx package a valid option?
Other services in the future will require multi-tenant support too, so I guess I would have to create some library/package anyway.
Today, I resolve the tenants by using a ResolveTenantBySubdomain middleware for the public API server. I then place the resolved tenant id in a context value that is sent with the call to the manager. Inside the different methods in the manager, I get the tenant id from the context value. This is then used with every SQL query/exec call, or an error is returned if the tenant id is missing or invalid. Should I even use context for this purpose?
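(For illustration, a minimal sketch of such a middleware; the package name, the lookup function and the context key are hypothetical and not taken from the question:)

package tenantmw

import (
	"context"
	"net/http"
	"strings"
)

// ctxKey is an unexported key type so other packages cannot collide with it.
type ctxKey struct{}

// ResolveTenantBySubdomain resolves the tenant from the request's subdomain
// via the supplied lookup function and stores the id in the request context.
func ResolveTenantBySubdomain(lookup func(subdomain string) (int64, bool), next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		sub := strings.Split(r.Host, ".")[0]
		id, ok := lookup(sub)
		if !ok {
			http.Error(w, "unknown tenant", http.StatusNotFound)
			return
		}
		ctx := context.WithValue(r.Context(), ctxKey{}, id)
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}

// TenantID is what the managers would call to read the id back out.
func TenantID(ctx context.Context) (int64, bool) {
	id, ok := ctx.Value(ctxKey{}).(int64)
	return id, ok
}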
Resolving the tenant on the gRPC server, I believe I have to use the UnaryInterceptor function for middleware handling. Since the gRPC API interface will only be accessed by other backend services, I guess resolving by subdomain is unnecessary here. But how should I embed the tenant id? In the header?
Really hope I'm asking the right questions.
Regards, Karl.
I cannot find any ORM for Go that supports multiple tenants out there. Is writing my own on top of perhaps the sqlx package a valid option?
ORMs in Go are a controversial topic! Some Go users love them, others hate them and prefer to write SQL manually. This is a matter of personal preference. Asking for specific library recommendations is off-topic here, and in any event, I don't know of any multi-tenant ORM libraries – but there's nothing to prevent you using a wrapper of sqlx (I work daily on a system which does exactly this).
Other services in the future will require multi-tenant support too, so I guess I would have to create some library/package anyway.
It would make sense to abstract this behavior from those internal services in a way which suits your programming and interface schemas, but there's no further details here to answer more concretely.
Today, I resolve the tenants by using a ResolveTenantBySubdomain middleware for the public API server. I then place the resolved tenant id in a context value that is sent with the call to the manager. Inside the different methods in the manager, I get the tenant id from the context value. This is then used with every SQL query/exec call, or an error is returned if the tenant id is missing or invalid. Should I even use context for this purpose?
context.Context is mostly about cancellation, not request propagation. While your use is acceptable according to the documentation for the WithValue function, it's widely considered a bad code smell to use the context package as currently implemented to pass values. Rather than use implicit behavior, which lacks type safety and many other properties, why not be explicit in the function signature of your downstream data layers by passing the tenant ID to the relevant function calls?
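As a rough sketch of that explicit style (the Manager type, the table and column names, and the error value are all made up for illustration), the tenant id becomes a regular argument that the compiler forces callers to supply:

package inventory

import (
	"context"
	"database/sql"
	"errors"
)

// ErrMissingTenant is returned when a caller forgets to supply a tenant id.
var ErrMissingTenant = errors.New("inventory: missing tenant id")

type Manager struct {
	db *sql.DB
}

// CountItems takes the tenant id explicitly instead of digging it out of a
// context value, so the dependency is visible in the signature.
func (m *Manager) CountItems(ctx context.Context, tenantID int64) (int64, error) {
	if tenantID == 0 {
		return 0, ErrMissingTenant
	}
	var n int64
	// Every query is scoped by the tenant_id discriminator column.
	err := m.db.QueryRowContext(ctx,
		"SELECT COUNT(*) FROM inventory_items WHERE tenant_id = $1",
		tenantID,
	).Scan(&n)
	return n, err
}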
Resolving the tenant on the gRPC server, I believe I have to use the UnaryInterceptor function for middleware handling. Since the gRPC API interface will only be accessed by other backend services, I guess resolving by subdomain is unnecessary here. But how should I embed the tenant id? In the header?
The gRPC library is not opinionated about your design choice. You can use a header value (to pass the tenant ID as an "ambient" parameter to the request) or explicitly add a tenant ID parameter to each remote method invocation which requires it.
Note that passing a tenant ID between your services in this way creates implicit trust between them: if service A makes a request of service B and annotates it with a tenant ID, you assume service A has performed the necessary access-control checks to verify that a user of that tenant is indeed making the request. There is nothing in this simple model to prevent a rogue service C asking service B for information about some arbitrary tenant ID. An alternative implementation would adopt a more complex trust-nobody policy, whereby each service is provided with sufficient access-control information to make its own policy decision as to whether a particular request scoped to a particular tenant should be fulfilled.
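For the header-based variant, a minimal sketch of a unary server interceptor in Go (the metadata key x-tenant-id and the package name are arbitrary choices for illustration, not something gRPC prescribes):

package middleware

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/metadata"
	"google.golang.org/grpc/status"
)

// TenantUnaryInterceptor rejects calls that do not carry a tenant id in the
// incoming metadata. How the id is then threaded to the data layer (explicit
// argument, request field, ...) is up to the individual handlers.
func TenantUnaryInterceptor(
	ctx context.Context,
	req interface{},
	info *grpc.UnaryServerInfo,
	handler grpc.UnaryHandler,
) (interface{}, error) {
	md, ok := metadata.FromIncomingContext(ctx)
	if !ok || len(md.Get("x-tenant-id")) == 0 {
		return nil, status.Error(codes.Unauthenticated, "missing x-tenant-id metadata")
	}
	return handler(ctx, req)
}

The interceptor would be registered with grpc.NewServer(grpc.UnaryInterceptor(TenantUnaryInterceptor)), and a calling service would attach the value with metadata.AppendToOutgoingContext(ctx, "x-tenant-id", id) before invoking the remote method.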

Single WCF endpoint for all commands in NServiceBus

We are trying to build an NServiceBus service that can communicate with WinForms- and WPF-based clients using WCF. I have read that you can inherit from WcfService,
like:
public class ThirdPartyWebSvc : WcfService<ThirdPartyCmd, ThirdPartyCmdResponse>
Then you simply create an endpoint in the app.config and you're done, as described here. But the problem is that I have to create an endpoint for every command.
I would like to have a single endpoint that accepts any command and returns its response.
public class ThirdPartyWebSvc : WcfService<ICommand, IMessage>
Can someone point me in the right direction? Using NServiceBus directly for client communication isn't an option for us, and I don't want to build a proxy-like server unless that's the only way to do it.
Thanks
So from what I can gather, you want to expose a WCF service operation which consumers can call, polymorphically passing in one of a number of possible commands, and then have the service route that command to the correct NServiceBus endpoint which handles it.
Firstly, in order to achieve this you should forget about using the NServiceBus.WcfService base class, because to use it you must closely follow the guidance in the article you linked in your post.
Instead, you could:
design your service operation contract to accept polymorphic requests by using the ServiceKnownType attribute on your operation definition, adding all possible command types,
host the service using a regular System.ServiceModel.ServiceHost(), and then configure an NServiceBus.IBus in the startup of your hosted WCF service, and
define your UnicastBusConfig config section in your service config file by adding all the command types along with the recipient queue addresses
However, you now have the following drawbacks:
Because of the requirement to be able to pass implementations of ICommand into the service, you will need to recompile your operation contract each time you add a new command type.
You will need to manage a large quantity of routing information in the config file, and if any of the recipient endpoints change, you will need to change your service config.
If your service has availability problems, then no more messages can reach any of your NSB endpoints.
You will need to write code to handle what to do if you do not receive a response message from the NSB endpoints in a timely manner, and this logic may depend on the type of command sent.
I hope you are beginning to see how centralizing this functionality is not a great idea.
All the above problems would go away if you could get your clients to send commands to the bus in the standard way, but without msmq how can you do that?
Well, for a start you could look at using one of the other supported transports.
If none of these work for you and you have to use WCF-hosted services, then you must follow the guidance in the linked article. This guidance is there to steer you in the correct direction: multiple WCF services sound like a pain, until you try to centralize them into a single service - then the pain gets bigger, not smaller.

How to establish relationships between Spring Data REST / Spring HATEOAS based (micro) services?

Trying to figure out a pattern for how to handle relationships between hypermedia-based microservices built on Spring Data REST or Spring HATEOAS.
If you have Service A (Instructor) and Service B (Course), each existing as a stand-alone app, what is the preferred method for establishing a relationship between the two services, in a manner that does not require columns for the IDs of the foreign service? Each service could have many other services that need to communicate in the same manner.
Possible solution (not sure it's the correct path):
Each service has a second table with a OneToMany relationship to the primary entity within the service. The table would have the following fields:
ID, entityID, rel, relatedID
Then in the opposite service, using Spring Data REST, set up a finder that queries the join table for matching records.
The primary goal I want to accomplish is that any service can have relationships with any number of other services without needing knowledge of the other service.
The basic steps are the following ones:
The service needs to discover the resources of the other service.
The service then adds a link to the resources it renders where necessary.
I have a very rudimentary example of these steps in this repository. The example consists of two services: one providing geo-spatial searches for stores, and a second providing some rudimentary customer management that optionally integrates with the store service if it is currently available.
Here's how the steps are implemented:
Resource discovery
In my example the consuming service (i.e. the customer one) uses Spring HATEOAS' Traverson API to traverse a set of link relations until it finds a link named by-location. This is done in StoreIntegration. So all the client service needs to know is the root URI (taken from the environment in my case) and a set of link relations. It periodically checks the link for existence using a HEAD-request.
This can of course be done in a more sophisticated manner: hard-wiring the base URI into the client service might be considered suboptimal, but it actually works quite well if you're using DNS anyway (so that you can swap out the actual host behind the hard-coded URI). Nonetheless it's a decent pragmatic approach: it still rediscovers the other service if it changes URIs, and no additional libraries are required.
For an even more sophisticated approach, have a look at Netflix's Eureka library, which is basically a service registry. Also, you might want to check out the Spring Cloud integration we have for that.
Augmenting resources with links
Spring HATEOAS provides a ResourceProcessor API that Spring Data REST leverages. It allows you to manipulate the Resource instance about to be rendered and e.g. add links to it. The implementation for the customers service can be found here.
It basically takes the link just discovered in the steps above and expands it using well-known parameters and thus allows clients to trigger a store geo-search by just following the link.
Beyond that
You can find a more sophisticated variant of this example in the examples projects for Spring Cloud. It takes the very same example but switches to Spring Cloud components such as Eureka integration, gathering metrics, adding UI etc.
In my case I can only derive related items from the service itself. My goal is to abstract the related items to the point that any number of services can be related to a service and only need to look up IDs or links. One thought was an @ElementCollection named 'related', joined on the entity ID of the service. Then in the @Embedded have a relLink field and a relatedID field. Then in the repository do a findBy to find the relLink and relatedID.
The hope is to keep it abstracted enough to essentially mimic a Many to Many setup.

WCF Authenticate Client with database

I have a WCF service that is supposed to provide service to several clients.
The WCF will have a Data-Access-Layer that will communicate with the database.
This is part of my design idea: My Design
As you can see, each client will connect to the 1st WCF service for pulling information (get product, update product), and also to the 2nd WCF service in a pub/sub manner, so it would be able to receive notifications about the different things it wants.
I have a table in the database for 'Users' with all the users in the system.
(there is an administrator, a normal user and a technician).
My question is: how do I handle the 'logging in' from the client against the database?
My current idea: have a function in the services called 'Connect(username, password)', and when a client connects it will pass the username and password to be authenticated against the database; only if authenticated will the client start sending commands.
The problem with this is that anyone can write their own client that connects to my service and runs other functions without authenticating. I can solve this by saving in the service whether or not the client has authenticated.
But is there a better solution than just having a 'Connect' function in the service?
Hope there is something simple yet effective.
You should create a custom user name and password validator that derives from the UserNamePasswordValidator abstract class and implements the Validate() method. Then you can validate the provided user name and password however you want. To learn more about setting this up, read this article.

OData / WCF Data Services metadata versioning

Is there any metadata versioning support in OData protocol and its WCF Data Services implementation?
Let us suppose that we have OData service that exposes the single Goods collection, and the Goods entity type has three properties: Key (string), Name (string) and AvailableSince(string). The service is already running, and there are some consumers that rely on this metadata schema.
Next, we want to update the Goods entity type - for example, replace the property AvailableSince (string) with something else, or change its type from string to datetime - so we will have two versions of metadata, and consumers that depend on the first version of metadata will not be able to send correct requests in terms of the second metadata schema.
Is there any way to provide both metadata versions within a single service? If yes, how can a consumer specify the metadata version in a request, and how should it be processed on the WCF side?
Thanks to all in advance.
Short answer: NO.
Most metadata changes require either a new service or breaking existing clients.
If the existing set of clients is important, our general recommendation is to create a new service...
i.e. something like:
/v1/myservice.svc
&
/v2/myservice.svc
Alex
OData Program Manager
This recent article describes which metadata changes require a new service version and which changes do not require a service update:
http://msdn.microsoft.com/en-us/library/ee473427.aspx