Securely exposing a gRPC service with authentication using gRPC-web

We are using Improbable's gRPC-Web library to expose a gRPC service (implemented in Go) to a JavaScript client running in-browser. This service will sit alongside an existing front-end Go service which hosts a REST-based API. The existing service uses session-based authentication to authenticate its users (session cookies + XSRF protection with double-submit cookies, which are also verified using some per-session server-side state).
The front-end Go service hosts various API endpoints which are either handled locally or fulfilled by proxying the request to other services. All endpoints are exposed via a Gin middleware handler chain, which implements the aforementioned session authentication and XSRF protection checks. It has been proposed that we host gRPC-Web's Go grpcweb proxy component behind this existing middleware to expose our gRPC service to the world.
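For context, the existing chain looks roughly like the sketch below (handler, cookie and header names are illustrative placeholders, not our real ones):

package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

// sessionValid stands in for the real server-side session lookup.
func sessionValid(id string) bool { return id != "" }

// sessionAuth checks the session cookie.
func sessionAuth() gin.HandlerFunc {
	return func(c *gin.Context) {
		id, err := c.Cookie("session_id")
		if err != nil || !sessionValid(id) {
			c.AbortWithStatus(http.StatusUnauthorized)
			return
		}
		c.Next()
	}
}

// xsrfCheck is the double-submit check: the XSRF cookie must match the
// XSRF header sent by the client.
func xsrfCheck() gin.HandlerFunc {
	return func(c *gin.Context) {
		cookie, _ := c.Cookie("xsrf_token")
		if cookie == "" || cookie != c.GetHeader("X-XSRF-Token") {
			c.AbortWithStatus(http.StatusForbidden)
			return
		}
		c.Next()
	}
}

func main() {
	r := gin.Default()
	api := r.Group("/api", sessionAuth(), xsrfCheck())
	api.GET("/ping", func(c *gin.Context) { c.String(http.StatusOK, "pong") })
	r.Run(":8080")
}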
I am interested in ensuring the approach for authenticating the incoming gRPC-Web requests is secure. The following methods have been proposed:
Token-based authentication – i.e. passing bearer tokens in the gRPC request metadata, which are verified by the back-end gRPC service. This matches the authentication model by which native gRPC calls would be authenticated if gRPC-Web was not involved.
In this model, gRPC-Web's responsibility is the implementation of the transport abstraction between browser and server, and marshalling requests to/from the native gRPC representation; authentication is delegated to the backing gRPC service. The gRPC-Web proxy is hosted as a separate endpoint external to the existing REST API.
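In this model I would expect the backing gRPC service to verify the token itself, roughly along the lines of the sketch below (the actual token validation is hand-waved, and a matching stream interceptor would also be needed):

package main

import (
	"context"
	"strings"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/metadata"
	"google.golang.org/grpc/status"
)

// validToken stands in for whatever verification we settle on (JWT
// validation, an introspection call, etc.).
func validToken(tok string) bool { return tok != "" }

// authUnary rejects any call that does not carry a valid bearer token in
// the "authorization" metadata key.
func authUnary(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo,
	handler grpc.UnaryHandler) (interface{}, error) {
	md, ok := metadata.FromIncomingContext(ctx)
	if !ok {
		return nil, status.Error(codes.Unauthenticated, "missing metadata")
	}
	vals := md.Get("authorization")
	if len(vals) == 0 || !strings.HasPrefix(vals[0], "Bearer ") ||
		!validToken(strings.TrimPrefix(vals[0], "Bearer ")) {
		return nil, status.Error(codes.Unauthenticated, "invalid bearer token")
	}
	return handler(ctx, req)
}

func main() {
	srv := grpc.NewServer(grpc.UnaryInterceptor(authUnary))
	_ = srv // register service implementations and call srv.Serve(lis) as usual
}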
Session-based authentication – re-use of the existing session authentication middleware. In this model, the grpcweb proxy server is hosted behind the Gin handler chain, and Gin performs its usual checks to verify the presence of the relevant cookies and XSRF headers prior to admitting the request (a rough sketch of this wiring follows the list of concerns below).
This approach re-uses much of the existing authentication logic. However, it requires the XSRF header to be passed so that the request is admitted by the Gin middleware. This is possible in the current implementation by setting request metadata, which is (currently) implemented by setting headers on the outbound HTTP request. It is unclear to me whether this:
is appropriate, as it appears to be a layer violation by exploiting the implementation detail that metadata is currently passed as HTTP headers. This is not documented and could conceivably change;
is compatible with gRPC-Web's websocket transport, which does not appear to propagate metadata into headers, as the websocket transport is dialled prior to any requests being transmitted;
suffers potential security issues in future, as the long-lived gRPC-Web transport connection to the front-end service is only authenticated by the front-end proxy when first established, rather than continually on every request (unless the gRPC service also validates the request metadata).
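For concreteness, here is roughly how I picture the wiring for this option, using the in-process grpcweb wrapper rather than the standalone grpcwebproxy binary (the auth handler below is a placeholder for our real session/XSRF middleware, and the route layout is illustrative):

package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
	"github.com/improbable-eng/grpc-web/go/grpcweb"
	"google.golang.org/grpc"
)

// existingAuth stands in for the current session + XSRF middleware chain;
// the real checks are elided here.
func existingAuth() gin.HandlerFunc {
	return func(c *gin.Context) { c.Next() }
}

func main() {
	grpcSrv := grpc.NewServer()
	// register gRPC service implementations on grpcSrv here

	// grpcweb.WrapServer translates gRPC-Web traffic into in-process gRPC
	// calls; it is only reached if the middleware admits the request.
	wrapped := grpcweb.WrapServer(grpcSrv)

	r := gin.Default()
	r.Any("/grpc/*any", existingAuth(),
		gin.WrapH(http.StripPrefix("/grpc", wrapped)))
	r.Run(":8080")
}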
My understanding is that gRPC-Web seeks to emulate the gRPC transport between a browser and server, so accordingly implements no specific authentication logic. The standard gRPC mechanisms for passing authentication details make no allowance for implicit session-based state, so my preference is the token-based approach.
Is this a reasonable analysis of the available options?

Related

How to use authorization in Gateway for a .NET microservice based app using Ocelot

We have a .NET microservice-based app where the Gateway is built using Ocelot. Until now we haven't done any authentication in the Gateway: the frontend calls an Authentication Provider service which responds with a JWT token, the token gets added to the request headers, and then the requests go through the gateway and each particular microservice is concerned with its own authentication and authorization.
We also have API Key based authentication in place, but it hasn't been used until now.
I added a new microservice whose authentication is done by API Key, and I want to handle authorization in the Gateway. That means the gateway should check the claims in the JWT token and, if the claims match, forward the request to the microservice using an API key header.
How can I do this with Ocelot, instead of writing controllers and actions mirroring each microservice's controllers and actions? I thought about implementing Delegating Handlers to take care of it, but maybe there is a better way?
A clean way to do this is to keep the access token between the client and the API gateway, and then use the token exchange flow between the gateway and the underlying APIs, so as to keep the potential attack surface of the initial access token small and avoid exposing internal mechanics (e.g. multiple audiences of underlying APIs in your initial access token, multiple API scopes).
There are many sources of information about this online. Here's one to get you started.
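At the wire level the exchange itself is an OAuth 2.0 token exchange (RFC 8693) call from the gateway to your token service. A rough sketch of that call, shown in Go purely to illustrate the protocol (endpoint, audience and token values are placeholders; in Ocelot this would typically live in a Delegating Handler, and a real call would also authenticate the gateway as an OAuth client):

package main

import (
	"fmt"
	"net/http"
	"net/url"
)

// exchangeToken trades the client-facing access token for a narrower token
// aimed at one downstream API.
func exchangeToken(subjectToken string) (*http.Response, error) {
	form := url.Values{
		"grant_type":         {"urn:ietf:params:oauth:grant-type:token-exchange"},
		"subject_token":      {subjectToken},
		"subject_token_type": {"urn:ietf:params:oauth:token-type:access_token"},
		"audience":           {"downstream-api"}, // placeholder audience
	}
	return http.PostForm("https://idp.example.com/oauth/token", form)
}

func main() {
	resp, err := exchangeToken("incoming-access-token")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // the JSON body carries the exchanged token
}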

Switch authorization mechanism in Apigee

My architecture has an API gateway (Apigee) that dispatches requests to the services behind it.
All APIs (HTTP and REST) have an authorization mechanism, so a client needs to authenticate itself to an IAM before operating on my APIs.
In order to avoid bad security design, all services behind the API gateway accept only authenticated requests.
For (technical and business) requirements reasons, I need to use different authentication mechanisms internally and externally: when a client sends a request, it has to contain the authorization information provided by IAM1, whereas the services behind the gateway accept authenticated requests that contain auth information coming from IAM2.
For the translation, I have created a 'translation-token-service' that exposes an API implementing the token translation.
My goal is to "link" Apigee to the 'translation-token-service' in order to let clients continue to use the authorization from IAM1 and the services behind to continue to use the authorization information from IAM2: when a request arrives, Apigee should invoke the 'translation-token-service' to obtain a token with the IAM2 authorization information, inject it into the request and dispatch it to the right backing service.
NB: the token conversion made by Apigee should somehow be cached (I don't care about the storage type), so if a client makes two calls with the same token, the first invocation asks to convert the token and the second one doesn't, for performance reasons.
NB: IAM1 is not OIDC compliant, IAM2 is.
How can I implement this flow?
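For clarity, the translate-and-cache step I have in mind behaves roughly like the sketch below (all names are made up, and the Apigee-specific policies that would implement the callout and the cache are left aside):

package main

import (
	"fmt"
	"sync"
	"time"
)

type cachedToken struct {
	token   string
	expires time.Time
}

// translator caches IAM1 -> IAM2 conversions so that repeated calls with the
// same inbound token skip the round trip to the translation-token-service.
type translator struct {
	mu    sync.Mutex
	cache map[string]cachedToken
	call  func(iam1Token string) (string, time.Duration, error) // calls translation-token-service
}

func newTranslator(call func(string) (string, time.Duration, error)) *translator {
	return &translator{cache: map[string]cachedToken{}, call: call}
}

func (t *translator) translate(iam1Token string) (string, error) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if c, ok := t.cache[iam1Token]; ok && time.Now().Before(c.expires) {
		return c.token, nil // cache hit: no second conversion
	}
	tok, ttl, err := t.call(iam1Token)
	if err != nil {
		return "", err
	}
	t.cache[iam1Token] = cachedToken{token: tok, expires: time.Now().Add(ttl)}
	return tok, nil
}

func main() {
	tr := newTranslator(func(iam1 string) (string, time.Duration, error) {
		// placeholder for the real call to the translation-token-service
		return "iam2-token-for-" + iam1, time.Minute, nil
	})
	tok, _ := tr.translate("iam1-token")
	fmt.Println(tok) // a second call with the same input would hit the cache
}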

Microservices - managing external system auth-token

Let's say I have 2 microservices (customer and payment), both of which consume APIs of an external system (e.g. Stripe).
API Authentication
Assume that before consuming any business API of Stripe, the API consumer (in my case the Customer & Payment services) has to first authenticate itself using API keys (AppId and secret).
Stripe provides an access token which must be passed in an HTTP header on subsequent API calls to Stripe.
Below are possible approaches:
Approach 1: https://drive.google.com/file/d/1BGn-hiNwZT4u3BIBmEv-HkJC0w0dk5CB/view?usp=sharing
Approach 2: https://drive.google.com/file/d/1JA1hFq7l7-4Ow3b32XNyb2co4tqxKZQ6/view?usp=sharing
Approach 1
Multiple auth tokens even though there is a single Stripe account (one token per service instance)
Each service has to manage auth token expiration/renewal
Approach 2
A single auth token is shared by all services
Dependency on the auth service
Auth token expiration/renewal is managed by a single service (Auth Service)
I would like to know which is the best fit in a microservice architecture. Any other suggestions?
Approach 2 is slightly more scalable and maintainable if more services will require access to external APIs.
However, the correct implementation would be an egress gateway for all your external API calls.
If you're going to spend the time to build an Auth service, you might as well go all the way and centralize your external API routing as well.
Benefits:
Single internal endpoint for external APIs, which reduces duplication.
Handles all authn and authz with external APIs for your services.
Centralizes all logging, auditing, disaster recovery, load balancing, etc.
Most gateway products, like Kong, can be used for egress as well.
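As a rough illustration of what that centralizes (all names here are made up), the token handling that each service would otherwise duplicate looks something like:

package main

import (
	"fmt"
	"sync"
	"time"
)

// tokenProvider owns acquisition and renewal of the external system's access
// token, which is what approach 2 (or an egress gateway) centralizes. fetch
// stands in for the real "authenticate with AppId + secret" call.
type tokenProvider struct {
	mu      sync.Mutex
	token   string
	expires time.Time
	fetch   func() (token string, ttl time.Duration, err error)
}

// Token returns a cached token and renews it only when it is close to
// expiring, so individual services never deal with expiration themselves.
func (p *tokenProvider) Token() (string, error) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.token != "" && time.Now().Before(p.expires.Add(-30*time.Second)) {
		return p.token, nil
	}
	tok, ttl, err := p.fetch()
	if err != nil {
		return "", err
	}
	p.token, p.expires = tok, time.Now().Add(ttl)
	return tok, nil
}

func main() {
	p := &tokenProvider{fetch: func() (string, time.Duration, error) {
		// placeholder for the real authentication call to the external system
		return "access-token", time.Hour, nil
	}}
	tok, _ := p.Token()
	fmt.Println(tok)
}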

How does WCF implement its authentication?

I've been using WCF and its authentication mechanisms (Windows, UserName/Password, client certificate) for a while.
I'd like to better understand how WCF uses these authentication mechanisms internally when creating SOAP messages and sending them.
Specifically, are the authentication credentials passed by WCF in every SOAP request, or does it only pass the authentication credentials in the first request, after which some kind of token is issued and passed back and forth during the subsequent session?
Are these authentication credentials (username+password, Windows, client certificate) passed in a different manner depending on whether the security mode is transport or message? Is it that in message mode the authentication credentials are inside the SOAP message, while in transport mode HTTP headers or other transport-protocol-specific mechanisms are used to pass the authentication credentials?
Let's just assume that the SOAP message is secured using HTTPS when transport mode is used and encrypted when using message mode, and not worry about message privacy or tampering for this question.
You've asked several big questions, but I'll try to answer the question about sessions.
Session and authentication handling depend on the binding you're using. If you're using BasicHttpBinding, for instance, the host basically acts like a web server and no persistent "sessions" are created; as a result, each SOAP request you send must contain everything necessary for authentication on the host. However, there are some bindings available, like WSHttpBinding, that allow for the creation of security and reliability sessions that persist after the initial authentication using a token.
Wrapping the message in SSL should prevent problems.

REST API Token-based Authentication

I'm developing a REST API that requires authentication. Because the authentication itself occurs via an external webservice over HTTP, I reasoned that we would dispense tokens to avoid repeatedly calling the authentication service. Which brings me neatly to my first question:
Is this really any better than just requiring clients to use HTTP Basic Auth on each request and caching calls to the authentication service server-side?
The Basic Auth solution has the advantage of not requiring a full round-trip to the server before requests for content can begin. Tokens can potentially be more flexible in scope (i.e. only grant rights to particular resources or actions), but that seems more appropriate to the OAuth context than my simpler use case.
Currently tokens are acquired like this:
curl -X POST localhost/token --data "api_key=81169d80...\
&verifier=2f5ae51a...\
&timestamp=1234567\
&user=foo\
&pass=bar"
The api_key, timestamp and verifier are required by all requests. The "verifier" is returned by:
sha1(timestamp + api_key + shared_secret)
My intention is to only allow calls from known parties, and to prevent calls from being reused verbatim.
Is this good enough? Underkill? Overkill?
With a token in hand, clients can acquire resources:
curl "localhost/posts?api_key=81169d80...\
&verifier=81169d80...\
&token=9fUyas64...\
&timestamp=1234567"
For the simplest call possible, this seems kind of horribly verbose. Considering the shared_secret will wind up being embedded in (at minimum) an iOS application, from which I would assume it can be extracted, is this even offering anything beyond a false sense of security?
Let me separate everything out and approach each problem in isolation:
Authentication
For authentication, Basic Auth has the advantage that it is a mature solution on the protocol level. This means a lot of "might crop up later" problems are already solved for you. For example, with Basic Auth, user agents know the password is a password, so they don't cache it.
Auth server load
If you dispense a token to the user instead of caching the authentication on your server, you are still doing the same thing: caching authentication information. The only difference is that you are turning the responsibility for the caching over to the user. This seems like unnecessary labor for the user with no gains, so I recommend handling this transparently on your server, as you suggested.
Transmission Security
If you can use an SSL connection, that's all there is to it: the connection is secure*. To prevent accidental multiple execution, you can filter out repeated URLs or ask users to include a random component ("nonce") in the URL.
url = username:key@myhost.com/api/call/nonce
If that is not possible, and the transmitted information is not secret, I recommend securing the request with a hash, as you suggested in the token approach. Since the hash provides the security, you could instruct your users to provide the hash as the Basic Auth password. For improved robustness, I recommend using a random string instead of the timestamp as a "nonce" to prevent replay attacks (two legit requests could be made during the same second). Instead of providing separate "shared secret" and "api key" fields, you can simply use the api key as the shared secret, and then use a salt that doesn't change to prevent rainbow table attacks. The username field seems like a good place to put the nonce too, since it is part of the auth. So now you have a clean call like this:
nonce = generate_secure_password(length: 16);
one_time_key = nonce + '-' + sha1(nonce+salt+shared_key);
url = username:one_time_key@myhost.com/api/call
It is true that this is a bit laborious. This is because you aren't using a protocol level solution (like SSL). So it might be a good idea to provide some kind of SDK to users so at least they don't have to go through it themselves. If you need to do it this way, I find the security level appropriate (just-right-kill).
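For completeness, the same construction in runnable form (Go used here purely as an example; the exact field order and encoding are whatever you standardize on with your clients):

package main

import (
	"crypto/rand"
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// oneTimeKey builds the nonce + hash value described above, to be sent as
// the Basic Auth password.
func oneTimeKey(salt, sharedKey string) (string, error) {
	buf := make([]byte, 16)
	if _, err := rand.Read(buf); err != nil { // random nonce, not a timestamp
		return "", err
	}
	nonce := hex.EncodeToString(buf)
	sum := sha1.Sum([]byte(nonce + salt + sharedKey))
	return nonce + "-" + hex.EncodeToString(sum[:]), nil
}

func main() {
	key, err := oneTimeKey("fixed-salt", "api-key-doubling-as-shared-secret")
	if err != nil {
		panic(err)
	}
	// e.g. https://username:<key>@myhost.com/api/call
	fmt.Println(key)
}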
Secure secret storage
It depends who you are trying to thwart. If you are preventing people with access to the user's phone from using your REST service in the user's name, then it would be a good idea to find some kind of keyring API on the target OS and have the SDK (or the implementor) store the key there. If that's not possible, you can at least make it a bit harder to get the secret by encrypting it, and storing the encrypted data and the encryption key in separate places.
If you are trying to keep other software vendors from getting your API key to prevent the development of alternate clients, only the encrypt-and-store-separately approach almost works. This is whitebox crypto, and to date, no one has come up with a truly secure solution to problems of this class. The least you can do is still issue a single key for each user so you can ban abused keys.
(*) EDIT: SSL connections should no longer be considered secure without taking additional steps to verify them.
A pure RESTful API should use the underlying protocol standard features:
For HTTP, the RESTful API should comply with existing HTTP standard headers. Adding a new HTTP header violates the REST principles. Do not re-invent the wheel, use all the standard features in HTTP/1.1 standards - including status response codes, headers, and so on. RESTFul web services should leverage and rely upon the HTTP standards.
RESTful services MUST be STATELESS. Any tricks, such as token-based authentication, that attempt to remember the state of previous REST requests on the server violate the REST principles. Again, this is a MUST; that is, if your web server saves any request/response context-related information on the server in an attempt to establish any sort of session on the server, then your web service is NOT stateless. And if it is NOT stateless it is NOT RESTful.
Bottom-line: For authentication/authorization purposes you should use the HTTP standard Authorization header. That is, you should add the HTTP authorization/authentication header to each subsequent request that needs to be authenticated. The REST API should follow the HTTP Authentication Scheme standards. The specifics of how this header should be formatted are defined in the RFC 2616 HTTP 1.1 standard – section 14.8 Authorization – and in RFC 2617, HTTP Authentication: Basic and Digest Access Authentication.
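For example, the client simply sends the standard Authorization header on every single request; in Go that is just (URL and credentials are placeholders):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "https://api.example.com/posts", nil)
	if err != nil {
		panic(err)
	}
	req.SetBasicAuth("username", "password") // adds "Authorization: Basic ..."
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}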
I have developed a RESTful service for the Cisco Prime Performance Manager application. Search Google for the REST API document that I wrote for that application for more details about RESTful API compliance. In that implementation, I have chosen to use the HTTP "Basic" Authorization scheme – check out version 1.5 or above of that REST API document, and search for authorization in the document.
In the web a stateful protocol is based on having a temporary token that is exchanged between a browser and a server (via cookie header or URI rewriting) on every request. That token is usually created on the server end, and it is a piece of opaque data that has a certain time-to-live, and it has the sole purpose of identifying a specific web user agent. That is, the token is temporary, and becomes a STATE that the web server has to maintain on behalf of a client user agent during the duration of that conversation. Therefore, the communication using a token in this way is STATEFUL. And if the conversation between client and server is STATEFUL it is not RESTful.
The username/password (sent in the Authorization header) is usually persisted in the database with the intent of identifying a user. Sometimes the user could mean another application; however, the username/password is NEVER intended to identify a specific web client user agent. The conversation between a web agent and server based on using the username/password in the Authorization header (following HTTP Basic Authorization) is STATELESS, because the web server front-end is not creating or maintaining any STATE information whatsoever on behalf of a specific web client user agent. And based on my understanding of REST, the protocol states clearly that the conversation between clients and server should be STATELESS. Therefore, if we want to have a true RESTful service we should use username/password (refer to the RFCs mentioned above) in the Authorization header for every single call, NOT a session kind of token (e.g. session tokens created in web servers, OAuth tokens created in authorization servers, and so on).
I understand that several so-called REST providers are using tokens like OAuth1 or OAuth2 access tokens to be passed as "Authorization: Bearer <token>" in HTTP headers. However, it appears to me that using those tokens for RESTful services would violate the true STATELESS meaning that REST embraces, because those tokens are temporary pieces of data created/maintained on the server side to identify a specific web client user agent for the valid duration of that web client/server conversation. Therefore, any service that is using those OAuth1/2 tokens should not be called REST if we want to stick to the TRUE meaning of a STATELESS protocol.
Rubens