Monitoring access to AWS API Gateway resources using API keys

I have built a gateway (using AWS API Gateway) in front of my REST API. I want to monitor the usage of resources on that API using the API keys generated by API Gateway. By 'usage' I mean which resources were requested and served to clients associated with an API key. Amazon claims that CloudTrail can be used to track gateway requests, but the x-api-key header does not show up in CloudTrail logs. Has Amazon provided an idiomatic way of doing this? Has anyone implemented this functionality in a custom manner? It seems reasonable that this functionality should be built in, but I cannot find how to do it anywhere.
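One option worth noting (not part of the original question): API Gateway access logging can include the $context.identity.apiKey variable, so every logged request records the key alongside which resource was requested and how it was answered. A rough boto3 sketch, where the API ID, stage name, and log group ARN are placeholders:

import boto3

# Placeholder identifiers; replace with your own API, stage, and CloudWatch Logs group.
REST_API_ID = "abc123"
STAGE_NAME = "prod"
LOG_GROUP_ARN = "arn:aws:logs:us-east-1:111111111111:log-group:apigw-access-logs"

# Access-log format: each request records the API key plus the resource and status.
log_format = (
    '{"apiKey":"$context.identity.apiKey",'
    '"resourcePath":"$context.resourcePath",'
    '"httpMethod":"$context.httpMethod",'
    '"status":"$context.status"}'
)

apigw = boto3.client("apigateway")
apigw.update_stage(
    restApiId=REST_API_ID,
    stageName=STAGE_NAME,
    patchOperations=[
        {"op": "replace", "path": "/accessLogSettings/destinationArn", "value": LOG_GROUP_ARN},
        {"op": "replace", "path": "/accessLogSettings/format", "value": log_format},
    ],
)

Per-key request counts can also be pulled from a usage plan, but the access log is what ties a key to the specific resources it requested.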

Related

How to restrict an API Gateway REST API to a CloudFront-hosted S3 website

I have an S3 static site hosted behind CloudFront. That site uses a REST API deployed in API Gateway, and the API currently has no access control.
I want to protect my API from being accessed by others; only my static site should be able to access it. I know I can use an API key, but that could be exposed through the browser console, which is not acceptable.
Is there another way to control access to my API?
Thanks in advance
I have a similar issue as well. It seems like using Referer or CORS restrictions is the best way to go. However, in practice I haven't been able to make it work after trying both CORS and Referer restrictions. API Gateway has automatic protection against malicious behavior like DDoS attacks, according to its FAQ, but it is disheartening that I haven't found a specific solution for protecting an API Gateway API that is only used by my S3/CloudFront static site.
Google Cloud allows you to use its API keys on the frontend for integrations with services like Google Maps. The way it protects those keys is by restricting them to certain domains. Unfortunately, I haven't found similar functionality for AWS keys. As you know, the only way to throttle or put quotas on API Gateway is through API keys, so this looks useless for a static site that can't expose those API keys publicly on the frontend.
It defeats the whole purpose of going completely serverless if I am unable to configure my serverless API Gateway the same way I could configure a normal backend EC2 server. For now, I've created billing alarms so I don't get surprised with a huge AWS bill if something goes wrong with my unprotected API Gateway.
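For reference, the billing alarm mentioned above can also be set up programmatically. A minimal boto3 sketch, with the threshold and SNS topic as placeholders (billing metrics are only published in us-east-1 and require billing alerts to be enabled on the account):

import boto3

# Billing metrics live in us-east-1 regardless of where the API is deployed.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="estimated-charges-guardrail",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,              # the billing metric updates roughly every 6 hours
    EvaluationPeriods=1,
    Threshold=50.0,            # placeholder: alarm once estimated charges exceed $50
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111111111111:billing-alerts"],  # placeholder SNS topic
)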

Should an API gateway be coupled to or decoupled from the business logic?

We are trying to build an API gateway in front of our application (we may split the application into microservices soon), and we have run into some problems.
1 - Different API types
There are two kinds of APIs in our application. Most of them are used by ourselves (user login/logout, news add/remove); we call these Self-used APIs here. Some APIs will be open to third parties; we call these Open APIs here.
Should all of them go through the gateway?
2 - Different authentication
The Self-used APIs may require the user to be logged in or to have certain permissions, while the Open APIs will require the third-party app to present a key which we use to identify it and limit its request rate.
Should all kinds of authentication be completed in the gateway? If so, since the Self-used API authentication is business-related, does that mean this API gateway cannot be shared by other applications?
Furthermore, third-party developers will create their applications and get a key back; they can also update or remove their apps (something like the Google API Console).
I am not sure whether this should be put in the gateway or in another microservice. My preference is to put these features in a new service, but the validation and rate limiting are done in the gateway; that means that for each request, the gateway has to look up the user, rate limit, and other information by key from that service, which couples the gateway to the business again.
There are quite a few ways of implementing an API gateway. You can serve different endpoints with a single API gateway. Here are a few relevant links:
Serverless blog "How to deploy multiple micro-services under one API domain with Serverless" https://serverless.com/blog/api-gateway-multiple-services/
Nginx "Do You Really Need Different Kinds of API Gateways? (Hint: No!)" https://www.nginx.com/blog/do-you-really-need-different-kinds-of-api-gateways-hint-no/
Sentialabs.io "Amazon API Gateway types, use cases and performance" https://www.sentialabs.io/2018/09/13/API-Gateway-Types-Compared.html
AWS API Gateway FAQs https://aws.amazon.com/api-gateway/faqs/
Think about the types of features you are trying to accomplish with your approach, and how API Gateway will help you address them.
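To make the coupling concern from the question concrete: one common pattern is for the gateway to call a small, application-agnostic "key info" endpoint and cache the result, so the gateway only depends on a generic contract (allowed or not, rate limit) while the developer-portal service owns the business details. A rough Python sketch; the service URL and response shape are assumptions:

import time
import requests

KEY_SERVICE_URL = "http://key-service.internal/keys"  # hypothetical internal service
CACHE_TTL = 60                                        # seconds between lookups for the same key
_cache = {}                                           # api_key -> (expiry_ts, key_info)

def lookup_key(api_key):
    """Return a generic dict like {"allowed": bool, "rate_limit": int} for an API key."""
    now = time.time()
    cached = _cache.get(api_key)
    if cached and cached[0] > now:
        return cached[1]

    resp = requests.get(f"{KEY_SERVICE_URL}/{api_key}", timeout=2)
    if resp.status_code == 404:
        info = {"allowed": False, "rate_limit": 0}   # unknown key: reject
    else:
        resp.raise_for_status()
        info = resp.json()

    _cache[api_key] = (now + CACHE_TTL, info)
    return info

The caching keeps the per-request dependency on the key service cheap, but the gateway still needs that service to be available, which is exactly the trade-off the question is pointing at.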

API authentication and JWT token validation with Kong

I plan to use Kong in our project. I'm currently working on a POC to see how we can integrate it into our platform as the main API gateway. I also want to use the JWT plugin for authentication and authorisation. I know that all API calls should go through the Kong gateway to be authenticated; then, if the authentication is valid, they can go on to the API.
Clients ---> Kong gateway ----> APIs
The part that is not very clear in my mind is how the APIs and Kong fit together.
Imagine a scenario where a client tries to call an API directly with a token (bypassing the gateway). How can the API use Kong to validate this token?
How does Kong authenticate the APIs (not the client)? In the examples I have seen so far, only the authentication of the clients is documented, not the authentication of the APIs that are "protected" by Kong.
When using Kong as an API gateway (or, for that matter, any gateway), we tend to put it at the point where external clients talk to your services. It is a means of discovering the individual services, and Kong can do a good enough job of validating such requests.
For the calls you make to other services from within your set of microservices, you may allow free passage by invoking the service directly. The challenge in that case is how the services discover each other (one way is to rely on DNS entries; we used to do that but later moved to Kubernetes and started using its service discovery) and how to restrict all incoming traffic to a given service from the outside world, so that it can only get in via the gateway (and that's where all the security lives).
The reason behind the above philosophy is that we trust the services we have created. (This may or may not be true for you; if it's not, you need to route all your traffic via an API gateway, consider your APIs as just another client that has to get hold of an access token to proceed, or perhaps run a separate service discovery mechanism for internal traffic.)
Or you may write a custom plugin in Kong that lets traffic originating from within your subnet pass through and validates everything else.
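On the question of how the API itself can validate a token that bypassed the gateway: one common answer is that the upstream service does not call Kong at all; it verifies the JWT signature itself using the same credential (shared secret or public key) that the Kong JWT plugin consumer was created with. A minimal Python sketch with PyJWT, assuming an HS256-signed token and a placeholder secret:

import jwt  # PyJWT

# Placeholder: the same secret the Kong JWT consumer credential uses.
# With RS256 you would verify against the public key instead.
JWT_SECRET = "kong-consumer-secret"

def verify_token(auth_header):
    """Verify a bearer token inside the upstream API, without calling back to Kong."""
    token = auth_header.removeprefix("Bearer ").strip()
    try:
        # Raises jwt.InvalidTokenError on a bad signature, expiry, etc.
        return jwt.decode(token, JWT_SECRET, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        raise PermissionError("invalid or expired token")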

Consul vs API Gateway

I would like to ask about the functionality of Consul and an API gateway. Can Consul replace an API gateway as a service referrer?
Or how should both of them be used in a microservices architecture?
Thank you
Consul is multi-datacenter service discovery (plus health checking) and a distributed K/V store.
An API gateway is a service that handles all the tasks involved in accepting and processing API calls, including traffic management, authorization and access control, monitoring, and API version management.
So they're quite different.
Depending on what you're trying to achieve and your current API gateway use case, you may be able to use Consul plus Consul-aware load balancers, such as https://github.com/fabiolb/fabio and https://traefik.io/.
At a high-level, an API gateway would become the single point of entry to your micro services. It would allow you to give a consistent user experience to your clients - irrespective of the backend services.
They act as an abstraction - when you hit a /product/{productId} endpoint, you shouldn't need to know about the internal microservices e.g. /reviews, /recommendations etc - the gateway can do this for you and return a single response.
API gateways will be configured to receive a request on a listen path e.g.
curl http://gateway.com/myservice/mypath -H 'Authorization: secret_auth_token'
Internally, the gateway will receive the request and will see that myservice points to a specific api definition.
Based on that auth token, it will be able to establish whether the user is allowed access, what rate limits/quotas apply, and also which upstream targets and paths they are allowed to access. A few typical features:
Authentication & Authorisation
Rate Limits
Body Transforms (Filters / Map Reduce / JSON -> XML, XML -> JSON)
Header Injection
JSON Schema Validation
Method Transforms
Mock Responses
API Versioning Strategy
Send requests to multiple targets
the list goes on.
So the gateway will then proxy the request to myservice.com/mypath for example and return the response to the client.
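As an illustration of how a listen path gets mapped to an upstream API definition, here is a rough sketch using Kong's Admin API (Kong is only one example of a gateway; the admin address and upstream URL are placeholders):

import requests

KONG_ADMIN = "http://localhost:8001"  # default Kong Admin API address

# Register the upstream service the gateway will proxy to.
requests.post(
    f"{KONG_ADMIN}/services",
    json={"name": "myservice", "url": "http://myservice.com/mypath"},
).raise_for_status()

# Attach a route so requests hitting /myservice on the gateway are forwarded upstream.
requests.post(
    f"{KONG_ADMIN}/services/myservice/routes",
    json={"paths": ["/myservice"]},
).raise_for_status()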
Now let's assume you want your upstream to be highly available - e.g. you may have myservice1.com and myservice2.com.
The gateway can be configured to load balance requests between these services. And you could use features of the gateway for testing the health of the upstreams, but there are also dedicated tools for this. One such tool is Consul.
API gateways should be able to integrate with service discovery tools. So let's assume myservice1.com goes down for maintenance; the gateway will know never to send traffic there and to only send to myservice2.com till myservice1.com comes back up.
The screenshot below is an example of the tyk.io API gateway's integration support for Consul.
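To show the kind of data a gateway (or any client) can pull from Consul for this, here is a small sketch against Consul's health API, asking for the instances of a service that are currently passing their checks (the default local agent address is assumed):

import requests

CONSUL_ADDR = "http://127.0.0.1:8500"  # default local Consul agent

# Ask Consul for instances of "myservice" that are passing their health checks.
resp = requests.get(
    f"{CONSUL_ADDR}/v1/health/service/myservice",
    params={"passing": "true"},
    timeout=2,
)
resp.raise_for_status()

# Each entry describes one healthy instance; a gateway would load-balance across these.
healthy = [
    (entry["Service"]["Address"] or entry["Node"]["Address"], entry["Service"]["Port"])
    for entry in resp.json()
]
print(healthy)  # e.g. [("10.0.0.12", 8080), ("10.0.0.13", 8080)]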

Security considerations for API Gateway clustering?

Clients communicate with a single point of entry via an API gateway, over HTTPS, against a RESTful API.
API gateway: API keys for tracking and analytics, OAuth for API platform authentication.
A user microservice provides user authentication and authorization, and generates a JWT that is signed and encrypted (JWS, JWE).
Other microservices determine permissions based on claims inside the JWT.
Microservices communicate internally via pub/sub, using the JWT in the message along with other info. Each microservice could be scaled out with multiple instances (a cluster with a load balancer).
Question: Can I cluster the API gateway and have the load balancer in front of it? What do I need to consider with respect to managing authentication, i.e. sharing of API keys across the API gateway cluster?
Extra notes: I'm planning on terminating SSL at the gateway and using bcrypt for passwords in the db.
Any feedback would be great, thank you.
Can I cluster the API gateway and have the load balancer in front of it?
Yes, you can. Most good API gateway solutions provide the ability to do clustering, e.g. https://getkong.org/docs/0.9.x/clustering/, or you can use a cloud-based API gateway: Azure API Management or AWS API Gateway.
What do I need to consider with respect to managing authentication?
These specifics depend on your choice of API gateway solution.
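On the side note about bcrypt in the question, a minimal sketch of hashing and checking a password with the bcrypt package (the password value is a placeholder):

import bcrypt

password = b"placeholder-password"

# Hash with a per-password random salt; store the resulting value in the db.
hashed = bcrypt.hashpw(password, bcrypt.gensalt())

# At login time, compare the submitted password against the stored hash.
assert bcrypt.checkpw(password, hashed)
assert not bcrypt.checkpw(b"wrong-password", hashed)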