Is using SAML bearer tokens for authenticating users to backend services a bad idea? - authentication

Suppose I have a front-end application that wants to fetch some data from a back-end service. (I do.) The service will need to verify that the end user is authenticated and authorized to use the service, and possibly filter the returned data based on the user's privileges. In my case, both the front-end app and the back-end service rely on Azure ACS for authentication.
Ideally the front-end would like to act on the behalf of the authenticated user, which sounds like a good fit for using an ActAs token (as specified in WS-Trust). However, it turns out that ACS does not currently support ActAs.
A workaround could be to use the actual bearer token (the bootstrap token in the front-end app) to authenticate to the back-end service. It's not hard to do, but would it be a bad idea for some reason?

From your front-end app, you could certainly pass along the identity data of the end user by either sending the token as is or sending the attributes from it. Both have issues. For the former, if it's also encrypted, the front- and back-ends will have to share the private key needed to decrypt it; they will also have to share audience restrictions, etc. in order for the back-end to consider the token valid for it. In other words, the front- and back-ends will be ONE relying party, not two. Might not be a problem, but be aware. In the latter case, you end up sending user data in a proprietary way, which could increase integration and maintenance costs over time. In both cases, you can authenticate the front-end app to the back-end using some other type of credential, e.g., a certificate used at the transport level, thus forming a trusted subsystem between them.
One thing that I would suggest you consider instead is OAuth 2. From this blog post, it seems to me that ACS supports it (though I don't have any first hand experience w/ it). The truly wonderful thing about OAuth 2 is that it bakes delegation in, and is nowhere near as complex as ActAs in WS-Trust. The net result is the same, i.e., the back-end service will have info about the calling service and the end user, but the amount of effort to get it set up is incomparable. The tokens will still be bearer tokens, but you can mitigate that to a degree by using SSL. Beyond SSL, you can put some additional measures in place, but the best, IMO, would be if Microsoft did something in ACS like Google has done w/ their Access Tokens for service accounts, which use asymmetric keys chained up to a PKI. (BTW, for all I know, Microsoft may have already done something like that; if so, you're set.)
Anyway, HTH!

Related

Securely using JSON web tokens to programmatically authenticate a user from one system into another

My team and I have been working on a web application for our clients that uses JSON web tokens for authentication and authorization. Using Azure AD as our identity provider, we verify a user's identity and generate a signed JWT with the user's permissions in it. The JWT then gets included in the authorization header of all subsequent requests to the APIs. Pretty standard stuff as far as JWTs go.
We're now being asked to provide the capability to link directly into our system from another third-party web application without forcing the user to reauthenticate. I'm trying to figure out if there's a way to do so without creating a massive security loophole.
The way I picture this working would be to implement an endpoint for programmatic authentication in our system that accepts a cryptographically signed payload with an API key and the user's ID or email address. The third-party system would have a private key with which to sign the payload, and we'd have a public one to verify the signature. If the request is legitimate, we'd issue a token for the specified user, and they could use that to link to whatever they like.
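To make this concrete, here's a rough Python sketch of the endpoint I picture. This is only an illustration: the payload fields, file names and secrets are made up, and it assumes the partner signs with an RSA private key (verified with the cryptography package) while we issue our own JWT with PyJWT.

# Hypothetical programmatic-authentication endpoint; all names and fields are illustrative.
import json
import time

import jwt  # PyJWT
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

PARTNER_PUBLIC_KEY = serialization.load_pem_public_key(
    open("partner_public_key.pem", "rb").read())
OUR_TOKEN_SECRET = "replace-with-our-signing-secret"

def handle_programmatic_auth(payload_bytes: bytes, signature: bytes) -> str:
    # 1. Verify the third party's signature; raises
    #    cryptography.exceptions.InvalidSignature if the payload was tampered with.
    PARTNER_PUBLIC_KEY.verify(signature, payload_bytes,
                              padding.PKCS1v15(), hashes.SHA256())

    data = json.loads(payload_bytes)
    # 2. Reject stale payloads so a captured request can't simply be replayed later.
    if abs(time.time() - data["timestamp"]) > 300:
        raise ValueError("request too old")

    # 3. Issue a short-lived token for the referenced user, just as we would
    #    after a normal interactive login.
    claims = {"sub": data["user_email"], "exp": int(time.time()) + 900}
    return jwt.encode(claims, OUR_TOKEN_SECRET, algorithm="HS256")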
I'm already getting yelled at by at least one person that this is a complete joke from a security standpoint because, among other things, it completely bypasses AAD authentication. I believe the third-party system in question does use AAD for authentication, but that's not really relevant either way because we're trusting them implicitly whether they've authenticated their users or not. Either way I take his point.
I'm not a security expert and I don't claim to know whether there even is a proper way to do this kind of thing, but from my vantage it doesn't really seem all that much less secure than any other mechanism of authentication and authorization using JWTs. Is that true? Are we nuts for even trying? Is there a way to do it that's more secure? What should I know about this that I demonstrably don't already?
Thanks in advance for the help. At the very least I hope this spurs some helpful conversation.
Single Sign-On (SSO) enables users to enter their credentials once to sign in and establish a session which can be reused across multiple applications without requiring them to authenticate again. This provides a seamless experience to the user and reduces repeated prompts for credentials.
Azure AD provides SSO capabilities to applications by setting a session cookie when the user authenticates the first time. The MSAL.js library allows applications to leverage this in a few ways.
MSAL relies on the session cookie to provide SSO for the user between different applications.
Read more in this documentation.

Microservice Authentication strategy

I'm having a hard time choosing a decent/secure authentication strategy for a microservice architecture. The only SO post I found on the topic is this one: Single Sign-On in Microservice Architecture
My idea here is for each service (e.g. authentication, messaging, notification, profile, etc.) to hold a unique reference to each user (logically, their user_id) and to be able to get the current user's id if they are logged in.
From my researches, I see there are two possible strategies:
1. Shared architecture
In this strategy, the authentication app is one service among others. But each service must be able to make the conversion session_id => user_id, so it must be dead simple. That's why I thought of Redis, which would store the key:value pair session_id:user_id.
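For illustration, a minimal Python sketch of that lookup, assuming redis-py and a made-up "session:<session_id>" key convention:

# Shared-architecture session lookup, sketched with redis-py; hosts and key names are placeholders.
import redis

r = redis.Redis(host="redis.internal", port=6379, db=0)

def create_session(session_id: str, user_id: str, ttl_seconds: int = 3600) -> None:
    # Written by the authentication service after a successful login.
    r.setex(f"session:{session_id}", ttl_seconds, user_id)

def current_user_id(session_id: str):
    # Any other service resolves session_id => user_id with a single GET.
    value = r.get(f"session:{session_id}")
    return value.decode() if value else None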
2. Firewall architecture
In this strategy, session storage doesn't really matter, as it is only handled by the authenticating app. The user_id can then be forwarded to other services. I thought of Rails + Devise (+ Redis or memcached, or cookie storage, etc.), but there are tons of possibilities. The only thing that matters is that Service X will never need to authenticate the user itself.
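To illustrate the difference, a small Python sketch of what the forwarding could look like, assuming the requests library and a made-up internal hostname (Service X never sees credentials, only the resolved user_id):

# Firewall architecture: the gateway authenticates, downstream services only
# trust a header it sets. The host and header names are illustrative.
import requests

def proxy_to_profile_service(authenticated_user_id: str, path: str):
    return requests.get(
        f"http://profile-service.internal{path}",
        headers={"X-User-Id": authenticated_user_id},
        timeout=5,
    )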
How do those two solutions compare in terms of:
security
robustness
scalability
ease of use
Or maybe you would suggest another solution I haven't mentioned in here?
I like solution #1 better, but I haven't found many reference implementations that would reassure me I'm going in the right direction.
Based on what I understand, a good way to solve this is to use the OAuth 2 protocol (you can find a little more information about it at http://oauth.net/2/).
When a user logs into your application they will get a token, which they can then send along with requests to other services to identify themselves.
Example of Chained Microservice Design
Resources:
http://presos.dsyer.com/decks/microservice-security.html
https://github.com/intridea/oauth2
https://spring.io/guides/tutorials/spring-security-and-angular-js/
Short answer: use OAuth 2.0-style token-based authentication, which can be used in any type of application, whether a web app or a mobile app. For a web application, the sequence of steps would then be to
authenticate against ID provider
keep the access token in cookie
access the pages in webapp
call the services
The diagram below depicts the components that would be needed. Such an architecture, separating the web and data APIs, gives good scalability, resilience and stability.
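As a minimal illustration of the last step ("call the services"), assuming Python with the requests library and a made-up data API host: the web tier reads the access token it previously stored in the cookie and forwards it as a standard Bearer header.

# Step 4: calling a downstream service with the access token; the URL is illustrative.
import requests

def call_data_api(access_token: str, resource: str):
    return requests.get(
        f"https://data-api.example.com/{resource}",
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=5,
    )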
You can avoid storing session info in the backend by using JWT tokens.
Here's how it could look using OAuth 2.0 & OpenID Connect. I'm also adding username & password login to the answer as I assume most people add it as a login option too.
Here are the suggested components of the solution:
Account-service: a microservice responsible for user creation & authentication. It can have endpoints for Google, Facebook and/or regular username & password authentication - login, register.
On register - meaning via the register endpoint or the first Google/FB login - we can store info about the user in the DB.
After the user successfully logs in using either of the options, on the server side we create a JWT token with relevant user data, like the userID. To avoid tampering, we sign it using a token secret we define (a string).
This token should be returned as an httpOnly cookie alongside the login response. It is recommended to mark it Secure (HTTPS-only) as well. In terms of the OpenID Connect specification, this token would be the ID token.
Client side web application: receives the signed JWT as an httpOnly cookie, which means this data is not accessible to JavaScript code, which is recommended from a security standpoint. When sending subsequent requests to the server or to other microservices, we attach the cookie to the request (in axios this means using withCredentials: true).
Microservices that need to authenticate the user by the token:
These services verify the signature of the JWT and read it using the same secret that was used to sign the token. They can then access the data stored in the token, like the userID, and query the DB for additional info about the user, or apply whatever other logic. Note - this is intended for authentication, not authorization; for that, we have refresh tokens & access tokens, which are out of scope of this question.
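A minimal sketch of both sides of that flow using PyJWT (the secret, the cookie handling and the claim names here are illustrative, not prescribed by any standard):

import time
import jwt  # PyJWT

TOKEN_SECRET = "secret-shared-between-account-service-and-the-other-services"

def issue_id_token(user_id: str) -> str:
    # Account-service: after a successful login, sign the relevant user data...
    claims = {"sub": user_id, "iat": int(time.time()), "exp": int(time.time()) + 3600}
    return jwt.encode(claims, TOKEN_SECRET, algorithm="HS256")
    # ...and return it to the client as an httpOnly (and Secure) cookie.

def authenticate(token_from_cookie: str) -> str:
    # Any other microservice: verify the signature and read the claims.
    claims = jwt.decode(token_from_cookie, TOKEN_SECRET, algorithms=["HS256"])
    return claims["sub"]  # the userID; fetch anything else from the DB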
I have recently created a detailed guide specifically about this subject, in case it helps someone: https://www.aspecto.io/blog/microservices-authentication-strategies-theory-to-practice/
One more architectural option is to use a NuGet package (library) that actually does the authentication/token validation. The NuGet package would be consumed by each microservice.
One more benefit is that there is no code duplication.
You can use IdentityServer4 for authentication and authorization purposes.
You should use the firewall architecture, as it gives you more control over security, robustness, scalability and ease of use.

What's the benefit of OAuth for securing REST APIs?

I want to make a web application that's a Single-Page client that interacts with a REST API on the server. I need to authenticate users of my app, as opposed to authenticating third-party apps (the latter being the focus of most traditional REST bibliography).
After googling a lot, I found there are many options (Basic HTTP Auth, HTTP Digest, OAuth, etc.) and several desirable properties one might get depending on the one chosen. For example, Basic Auth is simple but sends plain passwords unencrypted, which is not a good idea unless you guarantee that your app will run under TLS. Digest, on the contrary, doesn't send the plaintext credentials, but it prevents strong password hashing on the server and is vulnerable to man-in-the-middle attacks[1]. Meteor introduced SRP, which avoids storing and sending passwords[2].
It appears to me that the consensus is to use OAuth, particularly the OAuth2 credentials flow, since I want to authorize access to my resources on my own server[3][4][5]. What I don't get is what the benefits of this particular approach are. I do get the benefits of using OAuth as a form of delegated authentication, much like those of using OpenID for federated authentication: you don't handle authentication data at all in your server. But in the case where you apply the credentials flow for authorization (or the OAuth1 2-legged flow, for that matter), not introducing a third party, it looks like you still have to handle authentication by some other means, like HTTP Basic or Digest. So if you're doing that, why not stick to that method alone and send the credentials on every request, instead of the token?
Is it just to reduce the number of requests where you have to actually send the credentials? Is it just to stick to the OAuth convention? Those don't sound like strong arguments over the other methods. So, am I missing some other aspect, or did I misunderstand something?
If you are not federating, there is not really a good case for using OAuth.
If you just want to authenticate to your own service, basic or forms authentication is the way to go. The catch, as you've pointed out, is that you must use HTTPS. However, that applies to all authentication methods.
As long as you're using HTTPS, you can leave protection of credentials while in transit to the transport level security. That's what it's there for and (for the most part) that's what it's good at. If you're using plain HTTP (anywhere in your application, not just for authentication), you're done. There are all manner of very clever MitM attacks that totally break the security of any system that employs HTTP anywhere (Moxie Marlinspike gave an interesting presentation on the subject at Black Hat back in 2009).

Should HTTP Basic Authentication be used for client or user API authentication?

A typical recommendation for securing a REST API is to use HTTP Basic Authentication over SSL. My question is, should HTTP Basic Authentication only be used to authenticate the client (ie. the app accessing the API), or can it also be used to authenticate the user (the consumer of the app)?
It seems most APIs have to deal with both, as almost all web services employ some sort of user accounts. Just consider Twitter or Vimeo—there are public resources, and there are private (user specific) resources.
It seems logical that a simple REST API could do both client and user authentication at the same time using HTTP Basic Authentication (over SSL).
Is this a good design?
By authenticating the client you probably mean the use of an API key; this mechanism is used to track the concrete application/client. It also gives you the possibility to disable an application by disabling its key, for example when the client's author removes his account from the service. If you want to make your API public, then it is a good idea.
But you need to remember that it gives you no real protection: anyone can download the client and extract the key.
I would not recommend using Basic Authentication for API authentication. When it comes to authentication, you should consider that the application (client) developer has to implement their side of the authentication, too. That includes not only the authentication itself but also how to obtain credentials, and much more beyond that.
I recommend making use of an established authentication standard that ships with client libraries for the most popular programming languages. Those libraries make it much more likely that developers will adopt your API, because they reduce implementation effort on the client side.
Another important reason for using authentication standards is that they make developers (and others) more confident in the security of your authentication system. Those standards have been audited by experts, and their weaknesses and strengths are well known and documented. It is unlikely that you will develop an authentication flow that is nearly as solid unless you are a security expert :-).
The most established standard in this field is OAuth but you can find alternatives by searching for "oauth alternatives".
How does OAuth help you with your problem setting?
In OAuth 2, the application client has to obtain an access token for a user before accessing any protected resource. To get an access token, the application must authenticate itself with its application credentials. Depending on the use-case (e.g. 3rd party, mobile) this is done in different ways that are defined by the OAuth standard.
An access token should not only represent a user but also which operations may be used on what resources (permissions). A user may grant different permissions to different applications so this information must somehow be linked to the token.
How to achieve such semantics for access tokens, however, is not part of OAuth - it only defines the flow for obtaining access tokens. Therefore, the implementation of the access token semantics is usually application-specific.
You can implement such token semantics by storing a link between an access token and its permissions in your backend when you create the access token. The permissions may either be stored for every user-application combination or just for every application, depending on how fine-grained you want things to be.
Then, each time that an access token is processed by the API, you fetch this information and check whether the user has sufficient permissions to access the resource and to perform the desired operation.
Another option is to put the permission information into the access token and to sign or encrypt the token. When you receive the access token, you verify or decrypt it and use the permissions that are stored in it to make your decision. You may want to have a look at JSON Web Tokens (JWT) for how to accomplish that.
The benefit of the latter solution is better scalability and less effort in the backend implementation. The downsides are potentially larger requests (especially with RSA encryption) and less control over tokens.
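For illustration, a small Python/PyJWT sketch of the latter approach, with the permissions carried inside the signed token (the "scope" claim name and the permission strings are assumptions, not part of any standard):

import jwt  # PyJWT

SIGNING_SECRET = "access-token-signing-secret"

def check_permission(access_token: str, required: str) -> bool:
    # Verify the signature and read the permissions without any backend lookup.
    claims = jwt.decode(access_token, SIGNING_SECRET, algorithms=["HS256"])
    return required in claims.get("scope", [])

# e.g. require check_permission(token, "posts:read") before serving GET /posts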

REST API Token-based Authentication

I'm developing a REST API that requires authentication. Because the authentication itself occurs via an external webservice over HTTP, I reasoned that we would dispense tokens to avoid repeatedly calling the authentication service. Which brings me neatly to my first question:
Is this really any better than just requiring clients to use HTTP Basic Auth on each request and caching calls to the authentication service server-side?
The Basic Auth solution has the advantage of not requiring a full round-trip to the server before requests for content can begin. Tokens can potentially be more flexible in scope (i.e. only grant rights to particular resources or actions), but that seems more appropriate to the OAuth context than my simpler use case.
Currently tokens are acquired like this:
curl -X POST localhost/token --data "api_key=81169d80...
&verifier=2f5ae51a...
&timestamp=1234567
&user=foo
&pass=bar"
The api_key, timestamp and verifier are required by all requests. The "verifier" is returned by:
sha1(timestamp + api_key + shared_secret)
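For concreteness, here is that computation in Python (a hex digest is assumed; the concatenation order follows the formula above):

import hashlib

def make_verifier(timestamp: str, api_key: str, shared_secret: str) -> str:
    return hashlib.sha1((timestamp + api_key + shared_secret).encode()).hexdigest()

# e.g. make_verifier("1234567", api_key, shared_secret) produces the value
# sent in the "verifier" field shown above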
My intention is to only allow calls from known parties, and to prevent calls from being reused verbatim.
Is this good enough? Underkill? Overkill?
With a token in hand, clients can acquire resources:
curl localhost/posts?api_key=81169d80...
&verifier=81169d80...
&token=9fUyas64...
&timestamp=1234567
For the simplest call possible, this seems kind of horribly verbose. Considering the shared_secret will wind up being embedded in (at minimum) an iOS application, from which I would assume it can be extracted, is this even offering anything beyond a false sense of security?
Let me separate everything out and approach each problem in isolation:
Authentication
For authentication, Basic Auth has the advantage that it is a mature solution at the protocol level. This means a lot of "might crop up later" problems are already solved for you. For example, with Basic Auth, user agents know the password is a password, so they don't cache it.
Auth server load
If you dispense a token to the user instead of caching the authentication on your server, you are still doing the same thing: caching authentication information. The only difference is that you are shifting the responsibility for the caching to the user. This seems like unnecessary labor for the user with no gain, so I recommend handling this transparently on your server, as you suggested.
Transmission Security
If you can use an SSL connection, that's all there is to it: the connection is secure*. To prevent accidental multiple execution, you can filter out duplicate URLs or ask users to include a random component ("nonce") in the URL.
url = username:key@myhost.com/api/call/nonce
If that is not possible, and the transmitted information is not secret, I recommend securing the request with a hash, as you suggested in the token approach. Since the hash provides the security, you could instruct your users to provide the hash as the Basic Auth password. For improved robustness, I recommend using a random string instead of the timestamp as a "nonce" to prevent replay attacks (two legitimate requests could be made during the same second). Instead of providing separate "shared secret" and "api key" fields, you can simply use the api key as the shared secret, and then use a salt that doesn't change to prevent rainbow table attacks. The username field seems like a good place to put the nonce, too, since it is part of the auth. So now you have a clean call like this:
nonce = generate_secure_password(length: 16);
one_time_key = nonce + '-' + sha1(nonce+salt+shared_key);
url = username:one_time_key@myhost.com/api/call
It is true that this is a bit laborious. This is because you aren't using a protocol level solution (like SSL). So it might be a good idea to provide some kind of SDK to users so at least they don't have to go through it themselves. If you need to do it this way, I find the security level appropriate (just-right-kill).
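For reference, a Python sketch of the one-time key construction above; secrets.token_hex stands in for generate_secure_password, and the salt and shared-key values are placeholders.

import hashlib
import secrets

def one_time_key(shared_key: str, salt: str) -> str:
    nonce = secrets.token_hex(8)  # 16 hex characters, new for every request
    digest = hashlib.sha1((nonce + salt + shared_key).encode()).hexdigest()
    return f"{nonce}-{digest}"

# The result is used as the Basic Auth password:
# https://username:<one_time_key>@myhost.com/api/call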
Secure secret storage
It depends who you are trying to thwart. If you are preventing people with access to the user's phone from using your REST service in the user's name, then it would be a good idea to find some kind of keyring API on the target OS and have the SDK (or the implementor) store the key there. If that's not possible, you can at least make it a bit harder to get the secret by encrypting it, and storing the encrypted data and the encryption key in separate places.
If you are trying to keep other software vendors from getting your API key to prevent the development of alternate clients, only the encrypt-and-store-separately approach almost works. This is whitebox crypto, and to date, no one has come up with a truly secure solution to problems of this class. The least you can do is still issue a single key for each user so you can ban abused keys.
(*) EDIT: SSL connections should no longer be considered secure without taking additional steps to verify them.
A pure RESTful API should use the underlying protocol standard features:
For HTTP, the RESTful API should comply with existing HTTP standard headers. Adding a new HTTP header violates REST principles. Do not re-invent the wheel; use all the standard features of the HTTP/1.1 standard, including status response codes, headers, and so on. RESTful web services should leverage and rely upon the HTTP standards.
RESTful services MUST be STATELESS. Any trick, such as token-based authentication, that attempts to remember the state of previous REST requests on the server violates the REST principles. Again, this is a MUST; that is, if your web server saves any request/response context-related information in an attempt to establish any sort of session on the server, then your web service is NOT stateless. And if it is NOT stateless, it is NOT RESTful.
Bottom-line: for authentication/authorization purposes you should use the standard HTTP Authorization header. That is, you should add the HTTP authorization / authentication header to each subsequent request that needs to be authenticated. The REST API should follow the HTTP authentication scheme standards. The specifics of how this header should be formatted are defined in RFC 2616 (HTTP/1.1), section 14.8 Authorization, and in RFC 2617, HTTP Authentication: Basic and Digest Access Authentication.
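For illustration, building that header needs nothing beyond the standard library; here is a minimal Python sketch of the Basic scheme (per RFC 2617):

import base64

def basic_auth_header(username: str, password: str) -> dict:
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# Sent with every request, so the server never has to keep any session state.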
I have developed a RESTful service for the Cisco Prime Performance Manager application. Search Google for the REST API document that I wrote for that application for more details about RESTful API compliance. In that implementation, I chose to use the HTTP "Basic" Authorization scheme - check out version 1.5 or above of that REST API document, and search for authorization in the document.
On the web, a stateful protocol is based on having a temporary token that is exchanged between a browser and a server (via a cookie header or URI rewriting) on every request. That token is usually created on the server end; it is a piece of opaque data with a certain time-to-live, and its sole purpose is to identify a specific web user agent. That is, the token is temporary, and becomes STATE that the web server has to maintain on behalf of a client user agent for the duration of that conversation. Therefore, the communication using a token in this way is STATEFUL. And if the conversation between client and server is STATEFUL, it is not RESTful.
The username/password (sent in the Authorization header) is usually persisted in the database with the intent of identifying a user. Sometimes the user could mean another application; however, the username/password is NEVER intended to identify a specific web client user agent. The conversation between a web agent and server based on using the username/password in the Authorization header (following HTTP Basic Authorization) is STATELESS, because the web server front-end is not creating or maintaining any STATE information whatsoever on behalf of a specific web client user agent. And based on my understanding of REST, the protocol states clearly that the conversation between clients and server should be STATELESS. Therefore, if we want to have a true RESTful service we should use username/password (refer to the RFCs mentioned above) in the Authorization header for every single call, NOT a session kind of token (e.g. session tokens created in web servers, OAuth tokens created in authorization servers, and so on).
I understand that several so-called REST providers are using tokens like OAuth1 or OAuth2 access tokens to be passed as "Authorization: Bearer" in HTTP headers. However, it appears to me that using those tokens for RESTful services would violate the true STATELESS meaning that REST embraces, because those tokens are temporary pieces of data created/maintained on the server side to identify a specific web client user agent for the valid duration of that web client/server conversation. Therefore, any service that is using those OAuth1/2 tokens should not be called REST if we want to stick to the TRUE meaning of a STATELESS protocol.
Rubens