Multiple Gateways to handle production and sandbox requests separately

I am configuring the gateway in separate environments (production and sandbox) and I have a question:
https://docs.wso2.com/display/AM200/Maintaining+Separate+Production+and+Sandbox+Gateways#MaintainingSeparateProductionandSandboxGateways-MultipleGatewaystohandleproductionandsandboxrequestsseparately
In the store and publisher configuration I need to configure the <RevokeAPIURL>
In the document https://docs.wso2.com/display/CLUSTER44x/Clustering+API+Manager+2.0.0#ClusteringAPIManager2.0.0-ConfiguringtheAPIPublisher
<RevokeAPIURL>https://<IP of the Gateway>:8243/revoke</RevokeAPIURL>
Since my production and sandbox gateways are separate, which gateway address should I use in this configuration?
Thanks a lot.

<RevokeAPIURL> is used by the store node to call the revoke and token APIs of the gateway node when you (re)generate tokens (via the client credentials grant type) from the Store UI.
But in this deployment pattern there is a limitation: you have to pick one gateway node and configure it for <RevokeAPIURL> in the store node's api-manager.xml.
For example, let's say you configured the prod gateway there. Then, when you generate keys (either production or sandbox) from the Store UI, it will call the prod gateway's revoke and token APIs. Since both gateways point to the same key manager (or key manager cluster), token generation should work without a problem.
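For instance (the host name here is hypothetical), the store node's api-manager.xml would then contain:

<RevokeAPIURL>https://prod-gateway.example.com:8243/revoke</RevokeAPIURL>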
The only downside is caching. When you regenerate sandbox keys from the Store UI, it calls the prod gateway and clears the key cache of that gateway only, so the sandbox gateway's key cache won't be invalidated. You will therefore be able to call the sandbox gateway's APIs with the old, revoked token for about 15 more minutes, until the cache expires.
But if you don't use the Store UI to generate keys (i.e. the client credentials grant type), you won't experience this limitation. This is the typical case in a production environment, where the password grant type is usually used and the gateway's token API is called directly.

Related

how to make a global environment variable accessible for PODs in a kubernetes cluster

In my company, we have an internal Security Token Service consumed by all web apps to validate the STS token issued by the company's central access management server (e.g. BigIP/APM). Therefore the same endpoint for the token validation REST API has to be repeatedly set as an environment variable in the Deployment Configuration for each individual web app (OpenShift project). The same goes for an ES256 public key used by each web app to validate JWT tokens.
I'm wondering if there is a way to set up a global environment variable, ConfigMap, or anything else in OpenShift for this kind of common, shared setting per cluster, such that it is accessible by default to all web apps running in all pods in the cluster. Of course, each individual Deployment Config should be able to override these defaults at will.
Nothing built in. You could build that yourself with some webhooks and custom code. Otherwise you need to add an envFrom pointing at a Secret and/or ConfigMap to each pod template and copy that Secret/ConfigMap to all namespaces that need it (kubed can help with that part at least); a sketch follows below.
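A rough sketch of the envFrom approach (all names and the URL are hypothetical): one shared ConfigMap, copied into each namespace, referenced from every pod template.

apiVersion: v1
kind: ConfigMap
metadata:
  name: shared-sts-config
data:
  STS_VALIDATION_URL: https://sts.example.internal/validate
---
# In each Deployment's pod template, pull all keys in as environment variables:
spec:
  containers:
    - name: web-app
      image: example/web-app:latest
      envFrom:
        - configMapRef:
            name: shared-sts-config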
I'm wondering if there exists a way to set up a global Environment variable or ConfigMap or anything else in Openshift for these kind of common, shared...
When it comes to microservices, it is good practice to share nothing and avoid tight coupling. Global variables are typically not a good idea.
They make the system difficult to evolve and maintain, and keys in particular are something you should rotate regularly.
In my company, we have an internal Security Token Service consumed by all web apps to validate the STS token issued by the company's central access management server (e.g. BigIP/APM).
The same goes for an ES256 public key used by each web app to validate JWT tokens.
When you receive a JWT, you should inspect the iss (issuer; the value can be an HTTP URL) claim. If you trust the issuer, you can typically find an OpenID Connect Discovery endpoint where the issuer publishes a JSON Web Key Set (JWKS) with the keys needed to validate the token.
With this architecture, you have a central service that issues tokens and also publishes the keys to validate them, so there is no need to distribute the keys any other way and no shared variables. You also have a single place to rotate the keys, which makes maintenance easier.
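A minimal sketch of that validation flow in TypeScript with the jose library (the issuer URL and audience are hypothetical):

import { createRemoteJWKSet, jwtVerify } from 'jose';

// The issuer publishes its signing keys (e.g. the ES256 public key) at a JWKS endpoint.
const jwks = createRemoteJWKSet(new URL('https://sts.example.internal/.well-known/jwks.json'));

export async function validateToken(token: string) {
  // Verifies the signature against the fetched (and cached) JWKS, plus issuer and expiry.
  const { payload } = await jwtVerify(token, jwks, {
    issuer: 'https://sts.example.internal',
    audience: 'my-web-app',
  });
  return payload;
}

Each app then only needs the issuer URL; key rotation happens on the issuer's side via the JWKS endpoint.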

Securing Express API

I'm writing a web app with a separate frontend and backend. The frontend is written in React, and the backend is a Node.js server running an Express endpoint. How do I ensure that only my frontend can access the API, and not anyone else? My API URL is exposed in my frontend client-side code, so anyone can see it.
I added JWT authentication to my API, but I still need an unprotected /login endpoint in order to generate the JWT, and to log in and generate the token I must post both a username and password from my frontend, which other users can see, since it's done from the client side.
What is the proper way of securing an API that is hosted on a separate backend like this, so that only my frontend can access it, in a way where nobody can see what credentials are being used to access the endpoint?
You can't. Your API is on the internet. Anyone can access it. You can require an account and login credentials for the account before allowing access to the API, but once someone has an account and credentials, they can access the API from their own script rather than via your web page. This is how the web works. Not much you can do about it. And credentials being used by the client cannot be hidden. All data that is EVER on the client can be looked at by a hacker on the client. This is the way of the web.
Larger companies will typically monitor their API usage to look for inappropriate use. This includes rate limiting, detecting behaviors and sequences that are not typical of a regular human user. When they detect inappropriate use, they will often disable that action or ban the offending account, either temporarily or permanently. This is also why some pages use techniques to detect if an actual human is individually causing the operation such as reCaptcha. For example, on stack overflow, when editing comments or posts, I often run into rate limiting where it tells me that I have to wait a bit before it will accept my edit.
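As a small illustration, rate limiting an Express API with the express-rate-limit middleware could look like this (the limits are arbitrary):

import express from 'express';
import rateLimit from 'express-rate-limit';

const app = express();

// Allow at most 100 requests per client IP per 15-minute window on API routes.
app.use('/api/', rateLimit({ windowMs: 15 * 60 * 1000, max: 100 }));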
There is no absolutely secure way to store credentials in a client. The most common scheme for credentials is to require username and password (securely over https) and then when that is accepted on the server as legit credentials, some sort of token is issued to the client which can be used for future API calls. That token may be in a cookie or may need to be manually included with each subsequent API call (the advantage of a cookie when using APIs from a browser is that the cookie is automatically sent with each subsequent request).
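A minimal sketch of that scheme in Express with the jsonwebtoken package (checkCredentials is a stand-in, and the secret handling is simplified):

import express from 'express';
import jwt from 'jsonwebtoken';

const app = express();
app.use(express.json());

// Assumption: the real secret comes from configuration, never from source code.
const SECRET = process.env.JWT_SECRET ?? 'dev-only-secret';

// Stand-in for a real credential check against your user store.
const checkCredentials = (user: string, pass: string) => user === 'demo' && pass === 'demo';

// Unprotected login endpoint: validates credentials over HTTPS, issues a short-lived token.
app.post('/login', (req, res) => {
  const { username, password } = req.body;
  if (!checkCredentials(username, password)) return res.status(401).end();
  const token = jwt.sign({ sub: username }, SECRET, { expiresIn: '15m' });
  res.json({ token });
});

// Subsequent API calls must present the token; here via an Authorization header.
app.get('/api/data', (req, res) => {
  const header = req.headers.authorization ?? '';
  try {
    const claims = jwt.verify(header.replace('Bearer ', ''), SECRET) as jwt.JwtPayload;
    res.json({ user: claims.sub, data: [] });
  } catch {
    res.status(401).end();
  }
});

app.listen(3000);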
If the token is a cookie, then the cookie is stored in the browser's cookie storage and an expiration can be set for it. The browser's cookie storage is protected from access by web pages from other sites, but can be accessed by someone on the local computer (it's stored in the file system).
If the token is not a cookie, just returned as a token, and the client wishes to store it, there are a few other places that Javascript provides access to in order to store it. Local storage has similar security as cookie storage. It is protected from access by other web sites, but can be accessed by a person on the local computer.

How to support secure creation of a user account over an external API via a queued job?

So this question is delving into security and encryption and the problem potentially hasn't been encountered by many. Answers may be theoretical. Let me outline the scenario...
A website frontend is driven via a backend API. The backend has an endpoint handling a generic registration form with username and password. It's using SSL.
The backend API handles registration via an async job queue. The queue does not return responses to the API server. It's a set and forget operation to queue up the registration.
Queued jobs are picked up by workers. The workers take care of creating the user account. These workers need access to the plaintext user password so that they can trigger a third-party API registration call with the password.
So the real crux of the problem is the syncing of the password to the third party API while not revealing it to prying eyes. The queue poses the problem of not having direct access to the plaintext password from global POST data anymore, meaning it needs to be stored in some fashion in the queue.
The queue can easily store the hashed password and copy it directly to the users table. This solution does not allow for syncing the password with the third-party API, however, as it's already hashed. I toyed with two-way encryption, but am wholeheartedly concerned about leaving the password prone to decryption by an attacker.
Can anybody think of a secure way to handle this scenario of password syncing?
The queue is a requirement and it's assumed that this is readable by anyone with access to the server. The passwords don't necessarily have to be synced; the password for the third-party API could be a derivation of the original so long as there's a secure means to decrypt via the logged in user without supplying their password. This is essentially to simulate Single Sign-On with a third party API that does not support SSO.
There are a few ways to sync passwords:
Both auth stores use reversible encryption so that each system can extract the real values to send to the other system;
Both use the exact same encryption, so the encrypted text can be sent through and understood by both systems.
One system is the "master" in which the users always authenticate through and the "slave" systems simply receive acknowledgement that the user has logged in. This can take the form of machine generated passwords created by the master for use in account creation on the slaves.
One system is the "master" that all other systems make calls into for account validation. Similar to using LDAP or MyOpenID.
There are certainly issues you can run into with multi-master password syncing, such as ensuring password changes are properly replicated when a user changes their password.
In your case, it sounds like the user never directly interfaces with the 3rd-party API. If that's accurate, have the users authenticate against your system. Generate the 3rd-party API password when needed, store it with their account, and auto log them into the other system as necessary. Your primary password can be stored with irreversible encryption (i.e. hashed); however, the 3rd-party one would have to use reversible encryption. The queue would never have to see the initial password and instead would simply generate a new one and store it with the local account.
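A rough sketch of that approach in Node/TypeScript (key handling is simplified; the environment variable is an assumption):

import { randomBytes, createCipheriv } from 'crypto';

// Assumption: a 32-byte AES key provided out of band, e.g. via an env var.
const KEY = Buffer.from(process.env.THIRD_PARTY_KEY_HEX ?? '', 'hex');

// Generate a machine-made password for the third-party account; the user never sees it.
export function generateThirdPartyPassword(): string {
  return randomBytes(24).toString('base64url');
}

// Reversibly encrypt it (AES-256-GCM) so a worker can later decrypt it and call the third-party API.
export function encryptForStorage(plain: string) {
  const iv = randomBytes(12);
  const cipher = createCipheriv('aes-256-gcm', KEY, iv);
  const ciphertext = Buffer.concat([cipher.update(plain, 'utf8'), cipher.final()]);
  return {
    iv: iv.toString('hex'),
    tag: cipher.getAuthTag().toString('hex'),
    ciphertext: ciphertext.toString('hex'),
  };
}

A worker would reverse this with createDecipheriv and setAuthTag before making the third-party registration call.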

Authentication, Authorization and Session Management in Traditional Web Apps and APIs

Correct me if I am wrong: in a traditional web application, the browser automatically appends session information to a request to the server, so the server knows who the request comes from. What exactly is appended?
However, in an API-based app, this information is not sent automatically, so when developing an API, must I check myself whether the request comes from an authenticated user, for example? How is this normally done?
The HTTP protocol is stateless by design; each request is executed separately, in a separate context.
The idea behind session management is to put requests from the same client in the same context. The server issues an identifier and sends it to the client; the client saves that identifier and resends it in subsequent requests so the server can identify them.
Cookies
In a typical browser/server case, the browser manages a list of key/value pairs, known as cookies, for each domain:
Cookies can be managed by the server (created/modified/deleted) using the Set-Cookie HTTP response header.
Cookies can be accessed by the server (read) by parsing the Cookie HTTP request header.
Web-targeted programming languages/frameworks provide functions to deal with cookies on a higher level, for example, PHP provides setcookie/$_COOKIE to write/read cookies.
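At the wire level, these two headers carry the whole exchange (the cookie name and value here are arbitrary):

Set-Cookie: sid=00112233445566778899aabbccddeeff; Path=/; HttpOnly
Cookie: sid=00112233445566778899aabbccddeeff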
Sessions
Back to sessions: in a typical browser/server case (again), server-side session management takes advantage of client-side cookie management. PHP's session management sets a session id cookie and uses it to identify subsequent requests.
Web application APIs?
Now back to your question: since you'd be the one responsible for designing the API and documenting it, the implementation is your decision. You basically have to:
1. Give the client an identifier, be it via a Set-Cookie HTTP response header or inside the response body (an XML/JSON auth response).
2. Have a mechanism to maintain the identifier/client association, for example a database table that associates identifier 00112233445566778899aabbccddeeff with client/user #1337.
3. Have the client resend the identifier sent to it at (1.) in all subsequent requests, be it in an HTTP Cookie request header or a ?sid=00112233445566778899aabbccddeeff param (*).
4. Look up the received identifier using the mechanism at (2.), check that it represents a valid authentication and that the user is authorized to do the requested operation, then proceed with the operation on behalf of the auth'd user.
Of course you can build upon existing infrastructure: you can use PHP's session management (which takes care of 1., 2., and the authentication part of 4.) in your app, require that the client-side implementation do cookie management (which takes care of 3.), and then build the rest of your app logic on top of that.
(*) Each approach has pros and cons; for example, using a GET request param is easier to implement, but may have security implications, since GET requests are logged. You should use HTTPS for critical (all?) applications.
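A minimal sketch of steps 1.-4. in Express with TypeScript (in-memory session storage, for illustration only):

import express from 'express';
import cookieParser from 'cookie-parser';
import { randomBytes } from 'crypto';

const app = express();
app.use(express.json());
app.use(cookieParser());

// 2. identifier/client association; a real app would use a database table.
const sessions = new Map<string, { userId: number }>();

app.post('/login', (req, res) => {
  // Assumption: credentials in req.body have already been validated here.
  const sid = randomBytes(16).toString('hex');
  sessions.set(sid, { userId: 1337 });
  res.cookie('sid', sid, { httpOnly: true }); // 1. issue the identifier via Set-Cookie.
  res.json({ ok: true });
});

app.get('/api/profile', (req, res) => {
  const session = sessions.get(req.cookies.sid); // 3. the client resent the identifier.
  if (!session) return res.status(401).end(); // 4. validate before proceeding.
  res.json({ userId: session.userId }); // ...and act on behalf of the auth'd user.
});

app.listen(3000);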
Session management is the server's responsibility. When a session is created, a session token is generated and sent to the client (and stored in a cookie). After that, in subsequent requests between client and server, the client sends the token (usually) as an HTTP cookie. All session data is stored on the server; the client only stores the token. For example, to start a session in PHP you just need to:
session_start(); // Will create a cookie named PHPSESSID with the session token
After the session is created you can save data on it. For example, if you want to keep a user logged in:
// If username and password match, you can just save the user id on the session
$_SESSION['userID'] = 123;
Now you are able to check whether a user is authenticated or not:
if (isset($_SESSION['userID']))
    echo 'user is authenticated';
else
    echo 'user is not authenticated';
If you want, you can create a session only for an authenticated user:
if (verifyAccountInformation($user,$pass)){ // Check user credentials
// Will create a cookie named PHPSESSID with the session token
session_start();
$_SESSION['userID'] = 123;
}
There are numerous ways to authenticate users, both for web applications and APIs. There are a couple of standards, or you can write your own custom authorization and/or authentication. I would like to point out the difference between authorization and authentication. First, the application needs to authenticate the user (or API client) the request is coming from. Once the user has been authenticated, the application needs to determine, based on the user's identity, whether the authenticated user has permission to perform a certain operation (authorization). In most traditional web applications there is no fine granularity in the security model, so once a user is authenticated they are in most cases also authorized to perform a certain action. However, these two concepts (authentication and authorization) should be treated as two different logical operations.
Furthermore, in classical web applications, after the user has been authenticated and authorized (mostly by looking up the username/password pair in a database), the authorization and identity info is written to session storage. Session storage does not have to be server side, as most of the answers above suggest; it can also be stored in a cookie on the client side, encrypted in most cases. For example, the PHP CodeIgniter framework does this by default. There are a number of mechanisms for protecting the session on the client side, and I don't see this way of storing session data as any less secure than storing a sessionId that is then looked up in session storage on the server side. Also, storing the session client side is quite convenient in a distributed environment, because it eliminates the need to design a solution (or use an existing one) for central session management on the server side.
Furthermore, authenticating with a simple username/password pair does not always have to be done through custom code that looks up a matching user record in a database. There is, for example, the basic authentication protocol, or digest authentication. On proprietary software like the Windows platform, there are also ways of authenticating the user through, for example, Active Directory.
Providing a username/password pair is not the only way to authenticate; if you are using the HTTPS protocol, you can also consider authentication using digital certificates.
In a specific use case, if you are designing a web service that uses SOAP as the protocol, there is also the WS-Security extension for the SOAP protocol.
With all that said, I would say that the answers to the following questions enter the decision procedure for choosing an authorization/authentication mechanism for a Web API:
1) What's the targeted audience? Is it publicly available, or for registered (paying) members only?
2) Does it run on *NIX or on a MS platform?
3) What number of users is expected?
4) How much sensitive data does the API deal with (stronger vs. weaker authentication mechanisms)?
5) Is there any SSO service that you could use?
... and many more.
Hope this clears things up a bit, as there are many variables in the equation.
If the API-based app is a client, then the API client must have the option to retrieve/read the cookies from the server's response stream and store them, so it can automatically append the cookies while preparing a request object for the same server/URL. If that is not available, the session id cannot be retrieved.
You are right. The reason things are 'automatic' in a standard environment is that cookies are preferred over URL propagation to keep things pretty for the users. That said, the browser (client software) manages storing and sending the session cookie along with every request.
In the API world, simple systems often just have authentication credentials passed along with every request (at least in my line of work). Client authors are typically (again, in my experience) reluctant to implement cookie storage and transmission with every request, and generally anything more than the bare minimum.
There are plenty of other authentication mechanisms out there for HTTP-based APIs, HTTP basic/digest to name a couple, and of course the ubiquitous OAuth, which is designed specifically for these things if I'm not mistaken. No cookies are maintained; credentials are part of every exchange (fairly sure on that).
The other thing to consider is what you're going to do w/ the session on the server in an API. The session on a website provides storage for the current user, and typically stores small amounts of data to take load off the db from page to page. In an API context this is less of a need as things are more-or-less stateless, speaking generally of course; it really depends what the service is doing.
I would suggest you send some kind of token with each request.
Depending on the server and service, that can be a JSESSIONID parameter in your GET/POST request or something more mature like SAML in SOAP over HTTP in your web service request.
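For example, a client attaching a bearer token to every request might look like this in TypeScript (the URL is a placeholder, and fetch assumes Node 18+ or a browser):

async function getItems(token: string) {
  // The token obtained at login travels with each request.
  const res = await fetch('https://api.example.com/items', {
    headers: { Authorization: `Bearer ${token}` },
  });
  return res.json();
}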

Need help understanding WCF Security Architecture

So I've been banging my head against the wall for the past couple days trying to understand how WCF's security architecture worked. I have a goal in mind and I'm not sure that I'm going in the right direction.
The System
We use a combination of Active Directory and databases to manage our authentication and authorization. Client applications typically use their Windows credentials to authenticate, and the application checks against database tables to see if those users are allowed to authenticate and then whether they are authorized to use the resources they are requesting. The current setup has each client communicating directly with the database to do these checks.
The Goal
We want to use a Security Token Service to authenticate the client and provide "high level" authorizations for top-level resources. The services that provide data or perform actions would operate if the supplied SecurityToken was valid. Additionally, if the token did not contain a particular right, the service would query the token service to see whether the user has rights that were not loaded when the token was initially created. (We have over 300 rights in our database, and that could lead to rather hefty tokens for users with many rights.)
What I Don't Understand
1) I understand the token creation process, but I'm a little lost on how the client gets, stores, and sends the token to the services it intends to use. Does each "worker" service require a unique token (i.e. a call to CalculatorService requires one version of the token, and SaveResultService requires a new token to be generated)? Can I manually request, save, and send tokens?
2) On the "worker" service side, what is the process by which the token is verified? Does my "worker" service have to contact the Token Service for verification of the token? Or does it just read the token and assume, if it is properly signed, that the token is genuine and operate from that perspective?
3) Is it possible to encrypt my tokens manually and store them on the client side for use while they are valid (thus avoiding authentication attempts on every service call) and so that a web client can save the token between page loads and reuse it on successive calls?
Thanks for helping with my lack of understanding
You should go through the samples for Windows Identity Foundation. It provides the classes and implementations required to wrap claims that you can use or query for authentication and authorization.
http://msdn.microsoft.com/en-us/library/ee517291.aspx
What you are looking for is a durable token cache. Tokens have lifetimes and usually require renewal, and WIF does the renewal under the hood in most scenarios.
You can manually request and attach tokens and pool the proxies using WIF.