Limiting Access to API Gateway (and AWS Lambda) in a package - api

We have a package that we share with our customers. In the package, we have a chunk of code that makes HTTP request callouts to our central API Gateway. As of now, our API Gateway is open and accepts requests from everywhere, which is not good. I want to limit access to the users who are using our software. The only solution I have found is using IAM and providing authorization, which would require us to include our access keys in the package. Our users can install our package in any environment they want, and we have no control over that environment. So I think a viable option is to create a generic user policy with minimal access that allows our users to call our API Gateway. However, putting an access key in the code doesn't seem like a good idea. Another option is to provide our customers with access keys, but that also has overhead. What is a better alternative that is more secure and easy to maintain?

You can use built-in API Gateway API Key functionality when IAM policies aren't possible.
Since your clients could be on any infrastructure, rather than being limited to AWS, the API Gateway service provides a generic API key solution, which allows you to restrict client traffic to your API Gateway by requiring that client requests include API keys. This API key interface is part of the "API Usage Plan" feature.
The API Gateway documentation explains how to use the console to set up an API so that client traffic must carry an API key.
To set up API keys, do the following (a scripted equivalent is sketched after the steps):
Configure API methods to require an API key.
Create or import an API key for the API in a region.
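A rough scripted equivalent using the AWS SDK for Python (boto3); the API ID, resource ID, stage, and limits below are placeholders, not values from your account:

    import boto3  # AWS SDK for Python

    apigw = boto3.client("apigateway")

    REST_API_ID = "abc123"   # placeholder: your REST API id
    RESOURCE_ID = "xyz789"   # placeholder: the resource whose method you protect
    STAGE = "prod"           # placeholder: a deployed stage

    # 1. Configure the method to require an API key
    #    (the stage must be redeployed for this to take effect).
    apigw.update_method(
        restApiId=REST_API_ID,
        resourceId=RESOURCE_ID,
        httpMethod="GET",
        patchOperations=[{"op": "replace", "path": "/apiKeyRequired", "value": "true"}],
    )

    # 2. Create an API key and attach it to a usage plan for the stage.
    key = apigw.create_api_key(name="customer-a", enabled=True)
    plan = apigw.create_usage_plan(
        name="customer-a-plan",
        apiStages=[{"apiId": REST_API_ID, "stage": STAGE}],
        throttle={"rateLimit": 10.0, "burstLimit": 20},
    )
    apigw.create_usage_plan_key(usagePlanId=plan["id"], keyId=key["id"], keyType="API_KEY")

    # Clients must then send the key value in the x-api-key header on every request.
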
Your clients can implement a "secret storage" solution in order to avoid putting their API keys into their source code.
It certainly isn't wise for your clients to store their API keys in plain text inside their source code. Instead, they could use a secret storage solution to keep the API keys outside their codebase while still giving their applications access to the secret.
This article describes an example solution for secure secret storage (e.g. secure API key storage) which grants an application access to the application secret without putting the unencrypted secret into the source code. It uses Amazon KMS + Cryptex, but the same principle can be applied with other technologies: http://technologyadvice.github.io/lock-up-your-customer-accounts-give-away-the-key/
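As a small illustration of the same principle in Python, assuming the key was placed in AWS Secrets Manager instead of Cryptex (the secret name below is made up):

    import boto3

    def get_api_key(secret_name="my-product/api-key"):  # hypothetical secret name
        # The key lives in Secrets Manager, not in the codebase; the application's
        # IAM role only needs permission to call secretsmanager:GetSecretValue.
        client = boto3.client("secretsmanager")
        response = client.get_secret_value(SecretId=secret_name)
        return response["SecretString"]

    api_key = get_api_key()  # the plaintext key exists only in memory at runtime
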

Related

Is Api Keys authentication sufficient for Google reCAPTCHA Enterprise?

There are primarily two ways to authenticate using Google's reCAPTCHA Enterprise in a non-Google cloud environment (we use AWS). Google recommends using Service Accounts along with their Java client. However, this approach strikes me as less preferable than the second way Google suggests we can authenticate, namely, using Api Keys. Authenticating via an Api Key is easy, and it’s similar to how we commonly integrate with other 3rd party services, except rather than a username and password that we must secure, we have an Api Key that we must secure. Authenticating using Service Accounts, however, requires that we store a Json file on each environment in which we run reCAPTCHA, create an environment variable (GOOGLE_APPLICATION_CREDENTIALS) to point to that file, and then use Google's provided Java client (which leverages the environment variable/Json file) to request reCAPTCHA resources.
If we opt to leverage Api Key authentication, we can use Google’s REST Api along with our preferred Http Client (Akka-Http). We merely include the Api Key in a header that is encrypted as part of TLS in transit. However, if we opt for the Service Accounts method, we must use Google’s Java client, which not only adds a new dependency to our service, but also requires its own execution context because it uses blocking I/O.
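To make that concrete, the Api Key call we have in mind looks roughly like this (sketched in Python rather than Akka-Http; the project ID is a placeholder, and the key would be injected from our secret store rather than hard-coded):

    import os
    import requests

    PROJECT_ID = "my-gcp-project"               # placeholder
    API_KEY = os.environ["RECAPTCHA_API_KEY"]   # injected at runtime, never committed

    def create_assessment(token, site_key, expected_action):
        # createAssessment via the public REST endpoint, authenticated with the Api Key
        # sent in a request header (it could equally go in the ?key= query parameter).
        url = f"https://recaptchaenterprise.googleapis.com/v1/projects/{PROJECT_ID}/assessments"
        body = {"event": {"token": token, "siteKey": site_key, "expectedAction": expected_action}}
        resp = requests.post(url, json=body, headers={"X-goog-api-key": API_KEY}, timeout=5)
        resp.raise_for_status()
        return resp.json()["riskAnalysis"]["score"]
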
My view is that unless there is something that I’m overlooking--something that the Service Accounts approach provides that an encrypted Api Key does not--then we should just encrypt an Api Key per environment.
Our Site Key is locked down by our domain, and our Api Key would be encrypted in our source code. Obviously, if someone were to gain access to our unencrypted Api Key, they could perform assessments using our account, but not only would it be exceedingly difficult for an attacker to retrieve our Api Key, the scenario simply doesn't strike me as likely. Performing free reCAPTCHA assessments does not strike me as among the things that a sophisticated attacker would be inclined to do. Am I missing something? I suppose my question is: why would we go through the trouble of creating a service account, using the (inferior) Java client, storing a Json file on each pod, creating an environment variable, etc.? What does that provide us that the Api Key option does not? Does it open up some functionality that I'm overlooking?
I've successfully used Api Keys and it seems to work fine. I have not yet attempted to use Service Accounts as it requires a number of things that I'm disinclined to do. But my worry is that I'm neglecting some security vulnerability of the Api Keys.
After poring over the documentation a bit more, it would seem that there are only two reasons why you'd want to explicitly choose API key-based authentication over an OAuth flow with dedicated service accounts:
Your server's execution environment does not support OAuth flows
You're migrating from reCAPTCHA (non-enterprise) to reCAPTCHA Enterprise and want to ease migration
Beyond that, it seems the choice really comes down to considerations like your organization's security posture and approved authentication patterns. The choice does also materially affect things like how the credentials themselves are provisioned & managed, so if your org happens to already have a robust set of policies in place for the creation and maintenance of service accounts, it'd probably behoove you to go that route.
Google does mention sparingly in the docs that their preferred method of authentication for reCAPTCHA Enterprise is via service accounts, but they don't give a concrete rationale anywhere that I could find.

Making API only available for a specific web-application

Let's imagine the following scenario: web-app that calls an API.
As the web app code can be accessed in the web browser, anyone can replicate it and start making calls to our API, and we couldn't detect whether a request came from our web app or not.
How do I prevent it?
Non-public REST services must perform access control at each API endpoint. Web services in monolithic applications implement this by means of user authentication, authorisation logic and session management. This has several drawbacks for modern architectures that compose multiple microservices following the RESTful style.
In order to minimise latency and reduce coupling between services, the access control decision should be taken locally by REST endpoints.
User authentication should be centralised in an Identity Provider (IdP), which issues access tokens.
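As a minimal sketch of that local decision, assuming the IdP issues RS256-signed JWTs and publishes its public key (Flask and PyJWT here are illustrative choices, not a prescription):

    import jwt  # PyJWT
    from flask import Flask, abort, jsonify, request

    app = Flask(__name__)
    IDP_PUBLIC_KEY = open("idp_public_key.pem").read()  # published by the central IdP

    def require_token():
        auth = request.headers.get("Authorization", "")
        if not auth.startswith("Bearer "):
            abort(401)
        try:
            # Verified locally with the IdP's public key; no call back to the IdP per request.
            return jwt.decode(auth[len("Bearer "):], IDP_PUBLIC_KEY,
                              algorithms=["RS256"], audience="orders-api")
        except jwt.InvalidTokenError:
            abort(401)

    @app.get("/orders")
    def list_orders():
        claims = require_token()
        return jsonify({"user": claims["sub"], "orders": []})
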
You could also use an API key. API keys can reduce the impact of denial-of-service attacks; however, when they are issued to third-party clients, they are relatively easy to compromise (if you aren't planning on issuing keys to third parties, key hijacking shouldn't be a problem). If you do use them, follow these practices (a minimal sketch follows the list):
Require API keys for every request to the protected endpoint.
Return 429 "Too Many Requests" HTTP response code if requests are coming in too quickly.
Revoke the API key if the client violates the usage agreement.
Do not rely exclusively on API keys to protect sensitive, critical or high-value resources.
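A minimal sketch of those checks (the key store and the per-key throttle are deliberately naive, and every name is illustrative):

    import time
    from flask import Flask, abort, request

    app = Flask(__name__)
    VALID_KEYS = {"k-123": "customer-a"}  # in practice a database, so keys can be revoked
    last_seen = {}                        # naive one-request-per-second throttle

    @app.before_request
    def check_api_key():
        key = request.headers.get("X-Api-Key")
        if key not in VALID_KEYS:
            abort(401)                    # missing, unknown or revoked key
        now = time.time()
        if now - last_seen.get(key, 0.0) < 1.0:
            abort(429)                    # Too Many Requests
        last_seen[key] = now
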
You can read more good security practices in OWASP's official article.
I hope this helps.

API security in Azure best practice

I'm developing a web API that will be called by other web apps in the same Azure host and also other 3rd party services/ app. I'm currently looking into API Apps and API management, but there are several things unclear for me regarding security implementation:
Does the API App need to have authentication when implemented with API Management? If yes, what are the options? This link http://www.kefalidis.me/2015/06/taking-advantage-of-api-management-for-api-apps/ mentions "Keep in mind that it’s not necessary to have authentication on the API App, as you can enable authentication on API Management and let it handle all the details." So does that mean setting the API App's authentication to public (anonymous)? But then someone who knows the direct URL of the API App could access it directly.
What is the best way to implement API Management security? The one mentioned in the tutorial (having a raw subscription key passed in the header) seems prone to man-in-the-middle attacks.
What advantages does an API App add over implementing a normal Web API project?
Thanks in advance.
I can answer from the API Management perspective. To secure the connection between API Management and your backend (sometimes called last-mile security), there are a few options:
Basic Authentication: this is the simplest solution
Mutual certificate authentication: https://azure.microsoft.com/en-us/documentation/articles/api-management-howto-mutual-certificates/ - this is the most common approach.
IP Whitelisting: if you have a Standard or Premium tier APIM instance, the IP address of the proxy will remain constant. Thus you can configure firewall rules to block unknown IP addresses.
JWT token: if your backend has the capability to validate JWT tokens, you can block any callers without a valid JWT.
This video might also be helpful: https://channel9.msdn.com/Blogs/AzureApiMgmt/Last-mile-Security
I think the document meant that you can do the JWT token validation in APIM. However, to prevent someone from calling your backend directly, you'll have to implement one of the options mentioned above in your Api Apps.
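For instance, with the Basic Authentication option, the Api App itself could reject anything that does not carry the credential that only API Management knows. A rough sketch (shown in Python for brevity; the credentials are placeholders that would really come from app settings or Key Vault):

    import base64
    from flask import Flask, abort, request

    app = Flask(__name__)
    # Shared only between API Management and this backend; placeholder values here.
    APIM_USER, APIM_PASSWORD = "apim", "s3cret"

    @app.before_request
    def require_apim_credentials():
        expected = "Basic " + base64.b64encode(
            f"{APIM_USER}:{APIM_PASSWORD}".encode()).decode()
        if request.headers.get("Authorization", "") != expected:
            abort(401)  # the caller did not come through API Management
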

How do I use API keys, and token schemes effectively to secure REST API?

I know this question has been asked a lot of times in varying shapes and forms, but I'm still quite unclear about a few things. It's confusing how resources about this topic around the Web refer to "user" without clear context. API keys are issued for users, and so are access tokens, but I don't think API key users, are the same as access token users.
Do you need to have different API keys for different instances of end user clients? For example if I build a mobile app for a third party API, does each instance keep their own API key? I don't think they do, but how do I tie API keys, and access tokens together to say that a certain request comes from this particular instance of an app authorized by a known user? If I were the auth provider, do I have to keep track of each of those?
API keys and access tokens are usually represented by a pair of public and shared (secret) keys. As a service provider (server side), which one do I use to verify the message I receive, the API key or the access token? If I understand correctly, the idea is that each request should come with a signature derived from the secret part of the API key so that the server can check that it comes from a trusted client. Now what use do I have for the access token secret? I know the access token is used to verify that a system user has authorized the app to carry out operations on their behalf, but for which part of the message is the access token secret useful?
Is a hash generated from a (secured) random number, and a time stamp salt a good API key generation strategy?
Are there (preferably open source, Java-based) frameworks that do most of these?
Let me try to answer as many of your queries as I can.
Apikey vs Access Token usage
First of all, apikeys are not used per user. Apikeys are assigned per application (of a developer): a developer of a service signs up their application and obtains a pair of keys.
On the other hand, access tokens are issued for each end-user in the context of the usage (the exception is the Client Credentials grant).
Service providers can identify the application from the apikey in use.
Service providers can identify the end-users using access token attributes.
Any end-user API, that is, an API that has end-user resources (data or context) associated with it, should be protected by three-legged OAuth, so an access token should be required to access those resources.
Developer-only resources can be protected by an apikey or by two-legged OAuth. Here I am referring to the OAuth 2 standards.
OAuth 1 is preferred when HTTPS is not supported; that way the shared secret is not sent over an unprotected channel but is instead used to generate a signature. I strongly suggest OAuth 2 over HTTPS, and avoiding OAuth 1, for ease of use, unless you have a specific reason to use OAuth 1. You and your API consumers will find OAuth 2 much simpler to implement and work with.
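To make the split concrete, a call to an end-user resource typically carries both credentials, roughly like this (the host, path and header names are illustrative and vary by provider):

    import requests

    APP_API_KEY = "key-issued-when-the-app-was-registered"    # identifies the application
    ACCESS_TOKEN = "token-obtained-via-3-legged-oauth"        # authorises access to one user's data

    resp = requests.get(
        "https://api.example.com/v1/me/orders",                # placeholder endpoint
        headers={
            "apikey": APP_API_KEY,                             # who the app is
            "Authorization": f"Bearer {ACCESS_TOKEN}",         # whose resources may be read
        },
        timeout=5,
    )
    resp.raise_for_status()
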
As a service provider, you can use Apigee's Edge platform, which provides OAuth 1 and 2. It is not open source, but you can use it for free until you need higher TPS or stronger SLAs.

OAuth design for API without users permission

I am developing an API that will be used by users of my customers. Here is what the flow will look like:
User of my cloud based service creates an API key.
User embeds the API key into their own custom applications.
User deploys the application to their own end users.
The application talks to our API.
I am looking for advice on how to secure this API. I see a few issues:
The API key has to be embedded in the user's application and is therefore vulnerable to being stolen and abused.
Once an API key is compromised, it can easily be disabled, but how will my users update their applications to use a new API key, short of having to rebuild and redeploy the application?
Does anyone have any ideas on how to design this?
I may be mistaken, but maybe you could have your customers' users talk to your customers' APIs instead. Basically, your customers would keep their secret key on their servers, and not embed it in the clients they give their users, so it couldn't be hijacked (unless their server was compromised, of course). Then the users would talk to your API through your customers' APIs.
It would be slower and need more work on the part of your customers, but also safer.
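A bare-bones sketch of that arrangement, assuming the customer runs a small web tier of their own (the endpoint, header and environment variable names are invented):

    import os
    import requests
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    VENDOR_API = "https://api.vendor.example.com"      # your API (placeholder)
    VENDOR_API_KEY = os.environ["VENDOR_API_KEY"]      # lives only on the customer's server

    @app.post("/reports")
    def create_report():
        # End-user clients only ever talk to this endpoint; they never see the vendor key.
        resp = requests.post(
            f"{VENDOR_API}/v1/reports",
            headers={"x-api-key": VENDOR_API_KEY},
            json=request.get_json(),
            timeout=10,
        )
        return jsonify(resp.json()), resp.status_code
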
There are two solutions that I can see to this, although I'm sure there are more:
Use OAuth's RSA signature method, and implement a secure, certificate-based exchange of keys using your "cloud based service" as the exchange mechanism (or a public cert provider).
Implement a service that allows clients to "renew" their consumer key/secret automatically, but then secure that mechanism using RSA or some other public key encryption method.
Both of these are not easy, and would require your users' applications to "phone home" in order to update their consumer keys.
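As a sketch of what that "phone home" renewal could look like, using a key pair provisioned to each installation at deploy time (the endpoint, payload shape and file name are invented, and this is far from a complete protocol):

    import base64
    import json
    import time
    import requests
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    # Private key provisioned to this installation at deploy time (placeholder path).
    with open("install_key.pem", "rb") as f:
        private_key = serialization.load_pem_private_key(f.read(), password=None)

    payload = json.dumps({"installation_id": "inst-42", "ts": int(time.time())}).encode()
    signature = private_key.sign(
        payload,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )

    # The server verifies the signature against the installation's public key
    # before issuing a fresh consumer key/secret.
    resp = requests.post(
        "https://api.example.com/v1/keys/renew",  # invented endpoint
        json={"payload": payload.decode(),
              "signature": base64.b64encode(signature).decode()},
        timeout=10,
    )
    new_credentials = resp.json()
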
In the future I think OAuth 2 will provide at least protocol definitions for things like this, but for now, if you're using OAuth 1.0a, what you want to do doesn't really fit into the spec very well (i.e., you have to design much of it yourself).