How to Secure a Cloud Run gRPC Service - Authentication

This article explains how to handle authentication from an end-user with Identity Platform.
The crux seems to be that the client should authenticate with Identity Platform to get a token. That's straightforward enough, and I've been able to retrieve the token from the client-side code. The server side should receive the token from the client in a request header. But the article doesn't seem to explain what to do after this point. We can get the user with the Identity Platform SDK, but what if the token is invalid? Should we just throw an exception so that the gRPC call errors out?
There is a Java sample, and you can see that this is what it does here: in the sample it returns an HTTP 403 Forbidden status.
But my assumption was that Cloud Run would have a more automatic level of integration than this. This approach requires the Cloud Run gateway to send the request on to the gRPC service and get the response back. Theoretically, that would allow a malicious actor to continuously hit the gateway with spam tokens, which could potentially cost money. If we simply return an error, how are we protected from malicious actors pounding our services? Does the gateway automatically block the IP address if the gRPC service returns too many errors? How does it know which errors should trigger this? An HTTP 403 error could alert the gateway that the endpoint is being attacked, but what about gRPC?
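For reference, mapping a failed check to a gRPC status rather than letting an exception escape might look roughly like this in a Node service. This is a minimal sketch, assuming firebase-admin for verifying the Identity Platform ID token and @grpc/grpc-js for the server; the handler name and response shape are invented:

    import * as admin from 'firebase-admin';
    import * as grpc from '@grpc/grpc-js';

    admin.initializeApp(); // on Cloud Run this picks up the service's own credentials

    // Pull the Identity Platform ID token out of the gRPC metadata and verify it.
    // Throws if the token is missing or invalid.
    async function verifyCaller(metadata: grpc.Metadata): Promise<admin.auth.DecodedIdToken> {
      const header = metadata.get('authorization')[0]?.toString() ?? '';
      if (!header.startsWith('Bearer ')) {
        throw new Error('missing bearer token');
      }
      return admin.auth().verifyIdToken(header.slice('Bearer '.length));
    }

    // A unary handler that turns a failed check into the UNAUTHENTICATED status,
    // so the gRPC call errors out cleanly instead of surfacing an unhandled exception.
    const getProfile: grpc.handleUnaryCall<unknown, { uid: string }> = async (call, callback) => {
      try {
        const user = await verifyCaller(call.metadata);
        callback(null, { uid: user.uid });
      } catch {
        callback({ code: grpc.status.UNAUTHENTICATED, details: 'invalid or missing ID token' } as grpc.ServiceError, null);
      }
    };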

Part I.
So let's clarify some facts to understand why you need another layer.
Cloud Run is just the HTTP service. And as you mentioned, if you let the traffic hit it, you will pay for all of that traffic; that's why you need another layer in front that is designed for this specific purpose. There are other layers that can be placed in front, such as Cloud Armor, a Load Balancer, or Identity-Aware Proxy. Those are standalone products, with their own docs/config and their own pay-per-use model.
Part II.
Also look into API Gateway for gRPC: you can use the API management capabilities of API Gateway to add monitoring, hosting, tracing, authentication, and more to your gRPC services on Cloud Run.
In addition, once you specify special mapping rules, API Gateway translates RESTful JSON over HTTP into gRPC requests. This means that you can deploy a gRPC server managed by API Gateway and call its API using a gRPC or JSON/HTTP client, giving you much more flexibility and ease of integration with other systems.
You can create gRPC services for API Gateway in any gRPC-supported language.
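To make the transcoding point concrete: once the mapping rules are in place, the same gRPC method can also be reached by a plain JSON/HTTP client through the gateway. A rough sketch, in which the gateway hostname, path, and response handling are invented for illustration:

    // A JSON/HTTP client calling a gRPC method exposed through API Gateway.
    // The gateway applies the mapping rules and forwards the call to Cloud Run as gRPC.
    async function getUserViaGateway(userId: string, idToken: string): Promise<unknown> {
      const res = await fetch(
        `https://my-gateway-1a2b3c.uc.gateway.dev/v1/users/${encodeURIComponent(userId)}`,
        { headers: { Authorization: `Bearer ${idToken}` } },
      );
      if (!res.ok) {
        throw new Error(`gateway returned ${res.status}`);
      }
      return res.json(); // the gRPC response rendered as JSON
    }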

Related

Validating requests in gateway service and other services

In our implementation, all incoming requests to the API gateway service are validated using the JWT token and are routed to the corresponding service. But no protection is given to the other service endpoints, i.e. someone who knows the URL can easily access them directly.
What is the best way to approach this scenario? Do we really need to add request validation at the service level too?
If the "other service endpoints" are exposed through the API gateway to the Internet, then you are right. Anyone can guess the URLs and access them directly. If you're not exposing them to the outside world then you risk that someone who manages to get into your internal network will be able to freely call those services (it can be an attacker, but it can also be a malicious actor from your organisation).
You should always make sure that sensitive endpoints are properly protected, regardless of the fact if they are exposed to the outside world or not.
The services should validate all incoming requests, even the ones that are coming from the API gateway. The API gateway should perform a coarse-grained validation of credentials, e.g. check whether the token exists, has a valid signature, and isn't expired. The service should then validate the signature and expiration again and perform fine-grained validation, e.g. check whether the subject of the token has permission to access the given data.
The service should not blindly trust incoming requests just because those endpoints sit behind an API gateway. Someone might find a loophole in your API gateway, and the validation done at the service level will give you another layer of protection.
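As a rough sketch of that split on the service side, using the jsonwebtoken package (the key handling, claim names, and ownership rule here are illustrative assumptions, not from the answer above):

    import * as jwt from 'jsonwebtoken';

    // The same public key the gateway uses to check signatures.
    const PUBLIC_KEY = process.env.JWT_PUBLIC_KEY ?? '';

    // Coarse-grained checks repeated at the service: signature and expiration.
    // jwt.verify throws if either check fails.
    export function verifyRequestToken(authHeader: string | undefined): jwt.JwtPayload {
      const token = (authHeader ?? '').replace(/^Bearer /, '');
      return jwt.verify(token, PUBLIC_KEY, { algorithms: ['RS256'] }) as jwt.JwtPayload;
    }

    // Fine-grained check only this service can make: does the subject of the token
    // actually have permission to access the record it is asking for?
    export function assertOwnership(claims: jwt.JwtPayload, recordOwnerId: string): void {
      if (claims.sub !== recordOwnerId) {
        throw new Error('subject of the token may not access this record');
      }
    }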

How to authenticate a user in a microservice architecture with Lumen

I'm new to microservice architecture. I was reading about it and started to become interested in developing a website using this architecture. I've used the Lumen micro-framework.
I've searched the internet for what I'm about to ask but couldn't find the answer, so I finally reached out to Stack Overflow. Below is an overview of my current implementation.
Up to this point, I am able to request user, patient, treatment, etc. data through the API gateway and get the response data properly.
When the client requests user data such as name or department, it calls this route: http://localhost:8000/users/1 (port 8000 is the API gateway and 8001 the user service, let's say), and the gateway goes to 8001 and grabs the user data.
I've also enabled authorization between the API gateway and the individual services, in order to prevent CRUD operations from being performed against the individual services directly - when a request goes from the gateway to a service, I put a pre-generated token (which is also predefined in the service) in the header, and when it reaches the service, the service validates the token by comparing it with its predefined one. So that's working.
But to be able to make requests from the API gateway to the services, I've used the client credentials grant type. So here is my question.
How can I implement login and registration? Does the client credentials grant type enable me to do so? If not, what is the appropriate one? What is the right way to implement the system? Could you please kindly explain? Thank you so much.
Updated
In conclusion, I want to know how to configure authentication between the front end and the API gateway.
Your API architecture looks good - nothing there needs to change. However, there are 3 parts to the architecture:
APIs (done)
UIs (to do)
Authorization Server (maybe use a free cloud one?)
As a next step, maybe focus on login. My tutorial will help you understand the interaction and what needs to be coded in the UIs. Or, if you prefer, just view the message workflow.
Registering users can be a more complex topic and depends on the type of system. Happy to answer follow up questions if it helps.
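To make the grant-type point concrete: client credentials identifies a machine, not a person, so login needs a user-facing flow such as the authorization code grant. Below is a sketch of the two token requests against a hypothetical authorization server; the URLs, client IDs, and parameter values are all made up:

    // The grant the gateway already uses: machine-to-machine, no user involved,
    // so it cannot represent a logged-in person.
    async function gatewayServiceToken(): Promise<string> {
      const res = await fetch('https://auth.example.com/oauth/token', {
        method: 'POST',
        body: new URLSearchParams({
          grant_type: 'client_credentials',
          client_id: 'api-gateway',
          client_secret: process.env.GATEWAY_CLIENT_SECRET ?? '',
        }),
      });
      return (await res.json()).access_token;
    }

    // User login: the UI sends the user to the authorization server's login page,
    // receives a one-time code on the redirect URI, and exchanges it for tokens
    // that identify the user. Those tokens are what the UI sends to the API gateway.
    async function exchangeAuthorizationCode(code: string): Promise<string> {
      const res = await fetch('https://auth.example.com/oauth/token', {
        method: 'POST',
        body: new URLSearchParams({
          grant_type: 'authorization_code',
          code,
          redirect_uri: 'https://app.example.com/callback',
          client_id: 'web-ui',
        }),
      });
      return (await res.json()).access_token;
    }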

APIs authentication and JWT token validation with KONG

I plan to use Kong in our project. I'm currently working on a POC to see how we can integrate it into our platform as the main API gateway. I also want to use the JWT plugin for authentication and authorisation. I know that all the API calls should go through the Kong gateway to be authenticated. Then, if the authentication is validated, they can go on to the API.
Clients ---> Kong gateway ---> APIs
The part that is not very clear in my mind is how the APIs and Kong fit together.
Imagine a scenario where a client tries to call an API directly with a token (bypassing the gateway). How can the API use Kong to validate this token?
How does Kong authenticate the APIs (not the client)? In the examples I have seen so far, only the authentication of the clients is documented, not the authentication of the APIs that are "protected" by Kong.
When using Kong as an API gateway (or, for that matter, any gateway), we tend to put it at the point where external clients talk to your services. It is a means to discover the individual services, and Kong does a good enough job of validating such requests.
For the calls you make to other services from within your set of microservices, you may allow free passage by invoking the service directly. The challenge in that case will be how the services discover each other (one way is to rely on DNS entries; we used to do that but later moved to Kubernetes and started using its service discovery). You should then restrict all incoming traffic to a given service from the outside world, so that external callers can only get in via the gateway (and that's where all the security lives).
The reason behind the above philosophy is that we trust the services we have created. (This may or may not be true for you; if it's not, then you need to route all your traffic via the API gateway, consider your APIs as just another client that needs to get hold of an access token to proceed further, or maybe have a separate service discovery mechanism for internal traffic.)
Or you may write a custom plugin in Kong that lets through all the traffic that originates from within your subnet and validates everything else.
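If a network-level restriction isn't available, a similar effect can be approximated inside the service itself. Here is a sketch of an Express middleware that only accepts traffic arriving from the gateway's subnet; the subnet prefix and the framework are assumptions, and infrastructure-level rules should still be preferred where possible:

    import type { Request, Response, NextFunction } from 'express';

    // Assumed internal subnet that the Kong nodes live in.
    const GATEWAY_PREFIX = '10.0.';

    // Reject anything that bypasses the gateway and hits the service directly.
    export function onlyFromGateway(req: Request, res: Response, next: NextFunction): void {
      const ip = (req.socket.remoteAddress ?? '').replace(/^::ffff:/, ''); // strip the IPv4-mapped prefix
      if (!ip.startsWith(GATEWAY_PREFIX)) {
        res.status(403).json({ error: 'direct access to this service is not allowed' });
        return;
      }
      next();
    }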

How do I handle security within my microservice architecture?

In my web app architecture I have an API gateway which proxies requests to my microservices. There is also a common microservice which the other microservices can query via a REST API. All of these run on Node servers.
I want the microservices to be reachable only from the API gateway, except for the common service, which should also be reachable from the other microservices. What is the best network architecture to make this happen, and do I need to handle authentication between the servers in some way?
Security needs to be handled at multiple layers, and as such it's a really broad topic. I will, however, share some pointers which you can explore further.
First things first: any security comes at a cost, and it's a trade-off that you need to make.
If you can ensure that the services are available only to the other services and the API gateway, then you can delegate application-layer security to the API gateway, strip the security headers at the gateway itself, and continue to have free communication between services. It is like creating a restricted zone with IP restrictions (or other controls on where a service can be accessed from), with the API gateway or reverse proxy handling all the external traffic. This will allow you to concentrate on a few services as far as security is concerned. A point to note here is that you will be losing the authorization part as well, but you can retain it if you want to.
If you are using AWS, you need to look into security groups, VPNs, etc. to set up a secure layer.
Part of security is also ensuring the service is accessible all the time and is not susceptible to DDoS. API gateways do have means of safeguarding against such threats.
For the API gateway front-end authentication you could use OAuth 2.0, and for the back-end part you can use OpenID Connect, which will allow you to use a key value that is relevant to the user, for example a UUID, and use this to set access control at the microservice level, behind the API gateway.
You can find further information about OpenID Connect authentication at the following link.
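As a rough sketch of that last point, a microservice behind the gateway could key its access control off the user identifier (for example the sub claim or a UUID) that the gateway extracts from the OpenID Connect token and forwards. The header name and the ownership lookup below are hypothetical:

    import type { Request, Response, NextFunction } from 'express';

    // Hypothetical header the gateway sets after validating the OpenID Connect token.
    const USER_ID_HEADER = 'x-user-id';

    // Access control at the microservice level: the caller must own the resource.
    export function requireOwnership(loadOwnerId: (req: Request) => Promise<string>) {
      return async (req: Request, res: Response, next: NextFunction): Promise<void> => {
        const userId = req.header(USER_ID_HEADER);
        if (!userId) {
          res.status(401).json({ error: 'no authenticated user forwarded by the gateway' });
          return;
        }
        if (userId !== (await loadOwnerId(req))) {
          res.status(403).json({ error: 'not allowed to access this resource' });
          return;
        }
        next();
      };
    }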

Consul vs API Gateway

I would like to ask about the functionality of Consul and an API gateway. Can Consul replace an API gateway as a service referrer?
Or how should both of them be used in a microservices architecture?
Thank you.
Consul is a multi-datacenter service discovery tool (plus health checking) and a distributed K/V store.
API Gateway is a service that handles all the tasks involved in accepting and processing API calls, including traffic management, authorization and access control, monitoring, and API version management.
So they're quite different.
Depending on what you're trying to achieve and your current API gateway use case, you may be able to use Consul plus Consul-aware load balancers, such as https://github.com/fabiolb/fabio and https://traefik.io/.
At a high level, an API gateway would become the single point of entry to your microservices. It would allow you to give a consistent user experience to your clients, irrespective of the backend services.
It acts as an abstraction: when you hit a /product/{productId} endpoint, you shouldn't need to know about the internal microservices, e.g. /reviews, /recommendations, etc.; the gateway can call those for you and return a single response.
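As a small sketch of that aggregation idea, with the internal service URLs and response shapes invented for illustration:

    import express from 'express';

    const app = express();

    // One public endpoint fans out to two internal microservices and
    // returns a single combined response to the client.
    app.get('/product/:productId', async (req, res) => {
      const id = req.params.productId;
      const [reviews, recommendations] = await Promise.all([
        fetch(`http://reviews.internal/reviews?product=${id}`).then(r => r.json()),
        fetch(`http://recommendations.internal/recommendations?product=${id}`).then(r => r.json()),
      ]);
      res.json({ id, reviews, recommendations });
    });

    app.listen(8080);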
API gateways will be configured to receive a request on a listen path e.g.
curl http://gateway.com/myservice/mypath -H 'Authorization: secret_auth_token'
Internally, the gateway will receive the request and will see that myservice points to a specific API definition.
Based on that auth token, it will be able to establish whether the user is allowed access, what rate limits/quotas apply, and also what upstream targets and paths they are allowed to access. A few typical features:
Authentication & Authorisation
Rate Limits
Body Transforms (Filters / Map Reduce / JSON -> XML, XML -> JSON)
Header Injection
JSON Schema Validation
Method Transforms
Mock Responses
API Versioning Strategy
Send requests to multiple targets
The list goes on.
So the gateway will then proxy the request to myservice.com/mypath for example and return the response to the client.
Now let's assume you want your upstream to be highly available - e.g. you may have myservice1.com and myservice2.com.
The gateway can be configured to load-balance requests between these services. You could use features of the gateway for testing the health of the upstreams, but there are also dedicated tools for this. One such tool is Consul.
API gateways should be able to integrate with service discovery tools. So let's assume myservice1.com goes down for maintenance: the gateway will know not to send traffic there and to send traffic only to myservice2.com until myservice1.com comes back up.
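For a feel of the Consul side of this, registering an upstream instance together with a health check is a single call to the local agent's HTTP API. A sketch in which the service names, port, and health endpoint are made up:

    // Register myservice1 with the local Consul agent, including an HTTP health
    // check that load balancers and gateways can rely on for routing decisions.
    async function registerWithConsul(): Promise<void> {
      const registration = {
        Name: 'myservice',
        ID: 'myservice1',
        Address: 'myservice1.com',
        Port: 443,
        Check: {
          HTTP: 'https://myservice1.com/health',
          Interval: '10s',
          Timeout: '2s',
        },
      };
      const res = await fetch('http://localhost:8500/v1/agent/service/register', {
        method: 'PUT',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(registration),
      });
      if (!res.ok) {
        throw new Error(`Consul registration failed: ${res.status}`);
      }
    }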
The screenshot below is an example of the tyk.io API gateway's integration support for Consul.