Best practices for inter-microservice authentication on Kubernetes?

I'm writing a service to be deployed on Kubernetes. Clients will be other services, not people, and those services may be in other namespaces or even clusters. My goals are:
Authenticate the calling services
Authorize the calling services
Apply some policies based on the identity of the calling service (like quota)
I understand that Kubernetes doesn't provide services that really help with any of these, and I'll need to build something explicit into my service. I'd like to understand what the current best practice is and how to maximize what's available in Kubernetes or in the ecosystem to make these goals achievable while minimizing the coding and administrative burden. A few options that I've considered:
Custom username / shared-secret. I could just pass out shared secrets to all of the calling services, and write my own custom code to verify that the shared secret matches. I assume passing these around as Bearer tokens would be the right move. Would using Kubernetes serviceaccount and role objects be reasonable containers for these shared secrets? If so, are there libraries that make the lookups, associations, and policy work easier?
JWT. JWT seems more intended for passing around claims, like end-user identity, and would seem to require that all of the participating components share the same JWT secret. Since I don't want calling-service-foo to be able to authenticate as calling-service-bar, it's not clear that JWT is the right move. Thoughts?
mTLS. I could issue TLS certificates for all of the participating services. Are there components I can use to automate the issuance of these certificates? Should I try to use Kubernetes serviceaccount or role objects to manage these, or maybe roll my own CRDs?
Istio. It seems like Istio can do a lot of this transparently, but so far all of the resources I've found that explain this seem to assume transparency is a goal. Since I will need the identities of the calling services, though, is it possible to get that out of Istio? Can this work if my callers aren't in my cluster?
SPIRE (spiffe.io). This looks like it matches well for my use cases, but it seems new and I don't know how much experience people have with it.
Do any of these options (and please correct my understanding of any of them) stand out as best practices, or are there others I should be considering?
Thank you!

What you need is a component that acts as a gateway to the microservices' API endpoints. That kind of component belongs to a category of software called "API management", and its usage is not limited to Kubernetes.
There are many API management products to choose from (the Wikipedia page on API management lists several), but my project uses Gravitee, and so far we are loving it for its simple administration UI. Feel free to explore it at https://gravitee.io/.
N.B. I'm not related in any way to Gravitee.io apart from being one of the users (although I did contribute to one PR)

I know I'm late now but if anyone else is looking:
Istio now includes multicluster support, which makes cross-cluster communication painless.
reference: https://istio.io/latest/docs/setup/install/multicluster


Is API key authentication sufficient for Google reCAPTCHA Enterprise?

There are primarily two ways to authenticate using Google's reCAPTCHA Enterprise in a non-Google cloud environment (we use AWS). Google recommends using Service Accounts along with their Java client. However, this approach strikes me as less preferable than the second way Google suggests we can authenticate, namely, using API keys. Authenticating via an API key is easy, and it's similar to how we commonly integrate with other 3rd-party services, except rather than a username and password that we must secure, we have an API key that we must secure. Authenticating using Service Accounts, however, requires that we store a JSON file on each environment in which we run reCAPTCHA, create an environment variable (GOOGLE_APPLICATION_CREDENTIALS) to point to that file, and then use Google's provided Java client (which leverages the environment variable/JSON file) to request reCAPTCHA resources.
If we opt for API key authentication, we can use Google's REST API along with our preferred HTTP client (Akka HTTP). We merely include the API key in a header that is encrypted as part of TLS in transit. However, if we opt for the Service Accounts method, we must use Google's Java client, which not only adds a new dependency to our service, but also requires its own execution context because it uses blocking I/O.
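To illustrate the API key option, here is a minimal sketch in Python (the question's stack uses Akka HTTP, but the request shape is the same). The project ID, API key, and helper name are placeholders; only the request is built here, nothing is sent.

```python
import json
import urllib.request

# Hypothetical values -- substitute your real project ID and API key.
PROJECT_ID = "my-project"
API_KEY = "my-api-key"

def build_assessment_request(token, site_key):
    """Build (but do not send) a reCAPTCHA Enterprise assessment request.

    The API key is passed as the `key` query parameter, which Google's
    REST APIs accept as an alternative to OAuth credentials.
    """
    url = (
        "https://recaptchaenterprise.googleapis.com/v1/"
        f"projects/{PROJECT_ID}/assessments?key={API_KEY}"
    )
    body = json.dumps(
        {"event": {"token": token, "siteKey": site_key}}
    ).encode("utf-8")
    return urllib.request.Request(
        url, data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_assessment_request("client-token", "site-key")
```

Since the key rides along with the request, the security of this scheme rests entirely on TLS in transit and on keeping the key itself secret at rest.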
My view is that unless there is something I'm overlooking (something that the Service Accounts approach provides that an encrypted API key does not), we should just encrypt an API key per environment.
Our Site Key is locked down by our domain, and our API key would be encrypted in our source code. Obviously, if someone were to gain access to our unencrypted API key, they could perform assessments using our account, but not only would it be exceedingly difficult for an attacker to retrieve our API key, the scenario simply doesn't strike me as likely. Performing free reCAPTCHA assessments does not strike me as among the things that a sophisticated attacker would be inclined to do. Am I missing something? I suppose my question is: why would we go through the trouble of creating a service account, using the (inferior) Java client, storing a JSON file on each pod, creating an environment variable, etc.? What does that provide us that the API key option does not? Does it open up some functionality that I'm overlooking?
I've successfully used API keys and they seem to work fine. I have not yet attempted to use Service Accounts, as that requires a number of things that I'm disinclined to do. But my worry is that I'm neglecting some security vulnerability of the API keys.
After poring a bit more over the documentation, it would seem that there are only two reasons why you'd want to explicitly choose API key-based authentication over an OAuth flow with dedicated service accounts:
Your server's execution environment does not support OAuth flows
You're migrating from reCAPTCHA (non-enterprise) to reCAPTCHA Enterprise and want to ease migration
Beyond that, it seems the choice really comes down to considerations like your organization's security posture and approved authentication patterns. The choice does also materially affect things like how the credentials themselves are provisioned and managed, so if your org happens to already have a robust set of policies in place for the creation and maintenance of service accounts, it'd probably behoove you to go that route.
Google does mention in passing in the docs that their preferred authentication method for reCAPTCHA Enterprise is via service accounts, but they don't give a concrete rationale anywhere I saw.

How to disable HATEOAS in production?

My REST API is consumed only by our own mobile apps and it also has some restricted endpoints to be used by an admin UI. It is easy to get the point of entry by observing communication of the mobile app. From there it is very easy for a malicious user to discover all possible endpoints using HATEOAS.
Even if the API is properly protected by Spring Security, there are known security leaks like https://jira.spring.io/browse/DATAREST-1144 which allow modification of read-only data.
During development, HATEOAS is useful, but I want to disable it in production to make discovery of endpoints more difficult.
Your proposed solution implies that you intend to rely on Security through Obscurity to protect your admin functions. This is a bad idea, because a malicious actor could still guess the relevant functions that they shouldn't have access to, or simply remember what they learned by traversing your link hierarchy when you last exposed it.
You should definitely implement a robust authentication and authorization mechanism to protect your resources; then, even if a bad actor can discover the routes to protected resources through the link structure of an unprotected resource, they won't be able to access them. Links are meant to be discovered, and even if they are discovered by someone they aren't intended for, security best practices should still ensure they do no harm.
Rather than disabling HATEOAS (your app should be using it as a lookup for endpoints!) you should trust your security layer to do the best job it can.
Here are some considerations:
Add user authorisation to your admin endpoints. Perhaps you can achieve this with your existing API key infrastructure.
Move the admin endpoints to a separate, protected API
Neither of these suggestions is a quick fix, but I can't recommend disabling HATEOAS. Once it is gone, you will run into a case where you need it.

Microservices - IPC authentication/authorization

We're trying to figure out a best practice for IPC authentication and authorization. I'll explain.
We have a microservices-based SaaS architecture with a dedicated service for authentication. This service is responsible for performing authentication and managing auth tokens (JWTs).
Everything works perfectly well for users that log in and start to consume resources from the different services.
The question now is how to authenticate and authorize requests that are initiated by other services (without the context of a specific user)?
Should we generate a dedicated user per service and treat it like any other user in the system (with appropriate permissions)?
Should we have a "hard-coded"/dynamic token deployed among the services?
Any other ideas?
Our biggest concern is that such tokens/passwords will be compromised at some point, since requests from one service to another are treated with a high level of permissions.
Cheers,
I'm not a microservices expert; I've just started to get my feet wet in the microservices world. From what I've read so far, this can be handled in many ways, one of which, as you mentioned, is hard-coding API keys so that services recognise one another. But I never liked this idea personally, nor using a user per service like you mentioned. A solution I really liked is using OAuth2 for handling these scenarios. An interesting implementation I found is Gluu Server, and I think the client credentials grant type is what you're looking for: refer to https://gluu.org/docs/integrate/oauth2grants/.
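As a rough sketch of what the client credentials grant looks like from a calling service's side, here is a Python snippet that builds (but does not send) the token request. The endpoint URL and credentials are placeholders; consult your identity provider's documentation for the real token endpoint.

```python
import urllib.parse
import urllib.request

# Hypothetical endpoint -- substitute your identity provider's token endpoint.
TOKEN_URL = "https://idp.example.com/oauth/token"

def build_token_request(client_id, client_secret):
    """Build an OAuth2 client-credentials token request (not sent here).

    Each calling service gets its own client_id/client_secret, so services
    cannot impersonate one another; a successful response would contain a
    short-lived access token to present to downstream services.
    """
    body = urllib.parse.urlencode(
        {
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
        }
    ).encode("utf-8")
    return urllib.request.Request(
        TOKEN_URL,
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )

req = build_token_request("service-foo", "s3cret")
```

Because tokens expire quickly and each service has its own credentials, a compromised token is far less damaging than a compromised long-lived shared secret.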
Have fun :)
Normally, an API gateway is an integral part of any microservices system.
All the services are encapsulated and should not be accessible except through the API gateway.
Such encapsulation allows direct communication between the services without providing the requester payload that would be required if the request came straight through the API gateway.
In that case the request is treated as something different and follows a different logic/middleware pipeline. No additional special users are needed.

Microservices - how to solve security and user authentication?

There is a lot of discussion about microservice architecture. What I am missing, or maybe what I have not yet understood, is how to solve the issue of security and user authentication.
For example: I develop a microservice which provides a Rest Service interface to a workflow engine. The engine is based on JEE and runs on application servers like GlassFish or Wildfly.
One of the core concepts of the workflow engine is that each call is user-centric. This means that depending on the role and access level of the current user, the workflow engine produces individual results (e.g. a user-centric task list, or processing an open task that depends on the user's role in the process).
In my eyes, such a service is thus not accessible from everywhere. For example, if someone plans to implement a modern Ajax-based JavaScript application which should use the workflow microservice, there are two problems:
1) to avoid cross-origin problems from JavaScript/Ajax, the JavaScript web application needs to be deployed under the same domain the microservice runs on
2) if the microservice forces user authentication (which is the case in my scenario), the application needs to provide a transparent authentication mechanism.
The situation becomes more complex if the client needs to access more than one user-centric microservice that forces user authentication.
I always end up with an architecture where all services and the client application run on the same application server under the same domain.
How can these problems be solved? What is the best practice for such an architecture?
Short answer: check out OAuth, and manage caches of credentials in each microservice that needs to access other microservices. By "manage" I mean: be careful with security. Especially, mind who can access those credentials, and let the network topology be your friend. Create a DMZ layer and other internal layers reflecting the dependency graph of your microservices.
Long answer: keep reading. Your question is a good one because there is no simple silver bullet for what you need, although your problem is quite a recurrent one.
As with everything related to microservices that I have seen so far, nothing here is really new. Whenever you need a distributed system doing things on behalf of a certain user, you need distributed credentials to enable such a solution. This has been true since mainframe times. There is no way around that.
Automated SSH is, in a sense, such a thing. Perhaps it may sound like a glorified way to describe something simple, but in the end, it enables processes on one machine to use services on another machine.
In the Grid world, the Globus Toolkit, for instance, bases its distributed security using the following:
X.509 certificates;
MyProxy - manages a repository of credentials and helps you define a chain of certificate authorities up to finding the root one, which should be trusted by default;
An extension of OpenSSH, which is the de facto standard SSH implementation for Linux distributions.
OAuth is perhaps what you need. It is a way to provide authorization with extra restrictions. For instance, imagine that a certain user has read and write permission on a certain service. When you issue an OAuth authorization, you do not necessarily give full user powers to the third party. You may give only read access.
CORS, mentioned in another answer, is useful when the end client (typically a web browser) needs to make requests across web sites. But it seems that your problem is closer to a cluster in which you have many microservices that are managed by you. Nevertheless, you can take advantage of solutions developed by the Grid field to ensure security in a cluster distributed across sites (for high availability reasons, for instance).
Complete security is something unattainable. So all this is of no use if credentials are valid forever or if you do not take enough care to keep them secret to whatever received them. For such purpose, I would recommend partitioning your network using layers. Each layer with a different degree of secrecy and exposure to the outside world.
If you do not want the burden of the infrastructure required for OAuth, you can either use basic HTTP authentication or create your own tokens.
When using basic HTTP authentication, the client needs to send credentials on each request, thereby eliminating the need to keep session state on the server side for authorization purposes.
If you want to create your own mechanism, change your login requests so that a token is returned as the response to a successful login. Subsequent requests carrying the same token then act like basic HTTP authentication, with the advantage that this takes place at the application level (in contrast with the framework or app-server level in basic HTTP authentication).
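The roll-your-own token scheme described above can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation: all names are hypothetical, the password check is stubbed, and the token store is an in-memory dict (a real system would use a shared store and hashed passwords).

```python
import secrets
import time

TOKEN_TTL_SECONDS = 3600
_tokens = {}  # token -> (user, expiry timestamp)

def check_password(user, password):
    # Stub: replace with a real credential check (e.g. hashlib.scrypt
    # comparison against a stored hash).
    return password == "demo-only"

def login(user, password):
    """Return a fresh opaque token on successful login, else None."""
    if not check_password(user, password):
        return None
    token = secrets.token_urlsafe(32)  # unguessable random token
    _tokens[token] = (user, time.time() + TOKEN_TTL_SECONDS)
    return token

def authenticate(token):
    """Return the user for a valid, unexpired token, else None."""
    entry = _tokens.get(token)
    if entry is None:
        return None
    user, expiry = entry
    if time.time() > expiry:
        del _tokens[token]  # purge expired tokens on access
        return None
    return user
```

Note that, unlike basic HTTP authentication, this variant does keep state on the server side (the token table); the trade-off is that the user's actual credentials travel over the wire only once, at login.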
Your question is about two independent issues.
Making your service accessible from another origin is easily solved by implementing CORS. For non-browser clients, cross-origin is not an issue at all.
The second problem about service authentication is typically solved using token based authentication.
Any caller of one of your microservices would get an access token from the authorization server or STS for that specific service.
Your client authenticates with the authorization server or STS either through an established session (cookies) or by sending a valid token along with the request.
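To make the token flow concrete, here is a minimal sketch of a signed bearer token in the spirit of a JWT (HS256), using only the Python standard library. It is illustrative only: a real deployment would use a JWT library and, with multiple services, asymmetric keys so that only the STS can sign tokens while every service can verify them.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"shared-signing-key"  # illustrative; keep out of source control

def _b64(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_token(service, ttl=300):
    """STS side: sign a claims payload naming the caller and an expiry."""
    claims = {"sub": service, "exp": time.time() + ttl}
    payload = _b64(json.dumps(claims).encode())
    sig = _b64(hmac.new(SECRET, payload.encode(), hashlib.sha256).digest())
    return f"{payload}.{sig}"

def verify_token(token):
    """Service side: return the caller's name if signature and expiry hold."""
    try:
        payload, sig = token.split(".")
    except ValueError:
        return None
    expected = _b64(hmac.new(SECRET, payload.encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or forged token
    padded = payload + "=" * (-len(payload) % 4)
    claims = json.loads(base64.urlsafe_b64decode(padded))
    if time.time() > claims["exp"]:
        return None  # expired token
    return claims["sub"]
```

Because the token carries the caller's identity as a signed claim, the receiving service can apply per-caller authorization without a callback to the STS on every request.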

How to authenticate an application, instead of a user?

In the context of WCF/Web Services/WS-Trust federated security, what are the generally accepted ways to authenticate an application, rather than a user? From what I gather, it seems like certificate authentication would be the way to go, i.e. generate a certificate specifically for the application. Am I on the right track here? Are there other alternatives to consider?
What you are trying to do is solve the general Digital Rights Management problem, which is an unsolved problem at the moment.
There are a whole host of options for remote attestation that involve trying to hide secrets of some sort (traditional secret keys, or semi-secret behavioural characteristics).
Some simple examples that might deter casual users of your API from working around it:
Include &officialclient=yes in the request
Include &appkey=<some big random key> in the request
Store a secret with the app and use a simple challenge/response: send a random nonce to the app and the app returns HMAC(secret, nonce)
In general, however, the 'defender's advantage' here is quite small: however much effort you put into trying to authenticate that the bit of software talking to you is in fact your software, it isn't going to take your attacker/user much more effort to emulate it. (To break the third example I gave, you don't even need to reverse-engineer the official client: the user can just hook up the official client to answer the challenges their own client receives.)
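The third option above, HMAC challenge/response, can be sketched as follows. All names are illustrative; both sides share the same embedded secret, which is exactly the weakness the preceding paragraph describes.

```python
import hashlib
import hmac
import secrets

APP_SECRET = b"embedded-app-secret"  # shipped inside the official client

def make_challenge():
    """Server side: issue a fresh random nonce per authentication attempt."""
    return secrets.token_bytes(16)

def client_response(nonce):
    """App side: prove possession of the secret by answering the challenge."""
    return hmac.new(APP_SECRET, nonce, hashlib.sha256).hexdigest()

def server_verify(nonce, response):
    """Server side: recompute the HMAC and compare in constant time."""
    expected = hmac.new(APP_SECRET, nonce, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)
```

The random nonce prevents simple replay of a previously observed response, but, as noted above, anyone who extracts APP_SECRET from the shipped binary (or proxies challenges through the real client) can answer correctly.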
The more robust avenue you can pursue is licensing / legal options. A famous example would be Twitter, who prevent you from knocking up any old client through their API licence terms and conditions: if you created your own (popular) client that pretended to the Twitter API to be the official Twitter client, the assumption is their lawyers would come a-knocking.
If the application is under your control (e.g. your server) then by all means use a certificate.
If this is an application under user control (a desktop app), then there is no real way to authenticate the app in a strong way. Even if you use a certificate, a user can extract it and send messages outside the context of that application.
If this is not a critical, secure system, you could do something good enough, like embedding the certificate inside the application resources. But remember: once the application is physically on the user's machine, every secret inside it can sooner or later be revealed.