How to use tokens in a 3-tier web app using WCF

Good day. I've looked all over for an example of how to do this, and while I have found all sorts of useful info on how to implement bits of it, the overall solution still eludes me.
I have a 3-tier web-based application (Presentation tier is Web Forms, business/DAL tier is a WCF web service, DB is Oracle) for which I want to implement an Authorization/Authentication mechanism.
My thought was to use the Enterprise Library Security block to generate a token in the WCF service (and cache it there), and send the token id back to the web-app server. The token id would be sent back to the client browser (via a cookie), and on all subsequent requests back to the WCF service I would pass this token id in the header of the request message. I would then use some of the WCF extensibility interfaces to check for authentication by looking up the id in the cache. I was also going to cache the roles (I'm just using simple role-based access) with the token so that I had in-memory access to the user's role list and could avoid a DB round-trip for every access check.
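For what it's worth, the "look for the id in the cache" check can be hung off a dispatch message inspector. Below is a minimal sketch; the header name/namespace and the in-memory cache (assumed to be populated by the login operation) are placeholders, not something the Enterprise Library provides:

```csharp
using System.Runtime.Caching;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

// Sketch of a dispatch-side inspector that looks for a token id in a custom SOAP header
// and checks it against an in-memory cache. Header name/namespace and the cache layout
// are assumptions - the real names come from whatever your login operation issues.
public class TokenMessageInspector : IDispatchMessageInspector
{
    private const string HeaderName = "x-token-id";        // assumed header name
    private const string HeaderNs = "urn:myapp:security";  // assumed namespace

    public object AfterReceiveRequest(ref Message request,
        IClientChannel channel, InstanceContext instanceContext)
    {
        int index = request.Headers.FindHeader(HeaderName, HeaderNs);
        string tokenId = index >= 0 ? request.Headers.GetHeader<string>(index) : null;

        // The login operation is assumed to have cached the token (and its roles) here.
        if (tokenId == null || MemoryCache.Default.Get(tokenId) == null)
            throw new FaultException("Missing or expired security token.");

        return tokenId; // available again in BeforeSendReply if needed
    }

    public void BeforeSendReply(ref Message reply, object correlationState) { }
}
```

The inspector still has to be attached via a behavior (an IServiceBehavior or endpoint behavior), and the cache has to be shared with whatever issues the tokens.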
Does this part make sense as to the right way to go? If so, here is the second part.
Now my problem is how to manage role access and session management from the web server hosting the presentation tier. I'm managing the roles from the business layer, but I also need access to them in the presentation layer, because I also want to use role-based access to each page via web.config. How do I do this? Should I also pass the roles back to this layer? There is something smelly about both the service and the web app having to store their own versions of this role list.
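For concreteness, the kind of bridge being asked about might look like this in the Web Forms tier - a sketch assuming the role list was packed into the forms authentication ticket's UserData at login time (it could just as well come from a cached call to the WCF service):

```csharp
using System;
using System.Security.Principal;
using System.Web;
using System.Web.Security;

// Global.asax sketch: rebuild a principal with the roles the service returned so that
// <authorization> role checks in web.config keep working in the presentation tier.
// Storing the role list in FormsAuthenticationTicket.UserData is one option and an
// assumption here - you could equally re-query the WCF service and cache the result.
public class Global : HttpApplication
{
    protected void Application_PostAuthenticateRequest(object sender, EventArgs e)
    {
        if (!(Context.User?.Identity is FormsIdentity identity))
            return;

        // Roles were packed into UserData as a comma-separated list at login time.
        string[] roles = identity.Ticket.UserData.Split(',');
        Context.User = new GenericPrincipal(identity, roles);
    }
}
```

With the principal rebuilt on each request, the role-based authorization entries in web.config work as usual, and only the login round-trip has to ask the service for roles.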
Any assistance would be much appreciated!

Related

Microservices - IPC authentication/authorization

We're trying to figure out a best practice for IPC authentication and authorization. I'll explain.
We have a microservices-based SaaS architecture with a dedicated service for authentication. This service is responsible for performing authentication and managing auth tokens (JWTs).
Everything works perfectly well for users that log in and start to consume resources from the different services.
The question now is how to authenticate and authorize requests that are initiated by other services (without the context of a specific user)?
Should we generate a dedicated user per service and treat it like any other user in the system (with appropriate permissions)?
Should we have a "hard-coded"/dynamic token deployed among the services?
Any other ideas?
Our biggest concern is that such tokens/passwords will be compromised at some point, since requests from one service to another are treated with a high level of permissions.
Cheers,
I'm not a microservices expert - I've just started to get my feet wet in the microservices world. From what I've read so far, this can be handled in many ways, one of which, as you mentioned, is hard-coding API keys so that services recognise one another. But I never liked this idea personally - the same goes for using a user per service, as you mentioned. A solution I really liked is using OAuth2 for handling these scenarios - an interesting implementation I found is Gluu Server, and I think the client credentials grant type is what you're looking for - refer to https://gluu.org/docs/integrate/oauth2grants/.
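For illustration, a client-credentials token request is just a form POST to the token endpoint; the endpoint URL, client id, secret and scope below are placeholders (a Gluu deployment would expose its own token endpoint):

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

// Sketch of the OAuth2 client-credentials grant: the calling service authenticates as
// itself (no end user involved) and receives a short-lived access token.
public static class ServiceTokenClient
{
    private static readonly HttpClient Http = new HttpClient();

    public static async Task<string> GetAccessTokenAsync()
    {
        var form = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["grant_type"] = "client_credentials",
            ["client_id"] = "billing-service",   // assumed client registration
            ["client_secret"] = Environment.GetEnvironmentVariable("BILLING_CLIENT_SECRET"),
            ["scope"] = "orders.read"            // optional, provider-specific
        });

        HttpResponseMessage response =
            await Http.PostAsync("https://auth.example.com/oauth2/token", form);
        response.EnsureSuccessStatusCode();

        // The response body is JSON containing access_token / expires_in; parse it with
        // your JSON library of choice and cache the token until it expires.
        return await response.Content.ReadAsStringAsync();
    }
}
```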
Have fun :)
Normally, an API gateway is an integral part of any microservices system.
All the services are encapsulated behind it and should not be accessible without going through the API gateway.
Such encapsulation allows direct communication between the services, without the requester payload that would be required if the request came straight through the API gateway.
In that case the request is treated as something different and follows a different logic/middleware pipeline. No additional special users are needed.

Microservices - how to solve security and user authentication?

There is a lot of discussion about microservice architecture. What I am missing - or maybe have not yet understood - is how to solve the issue of security and user authentication.
For example: I develop a microservice which provides a REST service interface to a workflow engine. The engine is based on JEE and runs on application servers like GlassFish or Wildfly.
One of the core concepts of the workflow engine is that each call is user-centric. This means that, depending on the role and access level of the current user, the workflow engine produces individual results (e.g. a user-centric task list, or processing of an open task which depends on the user's role in the process).
Thus, in my eyes, such a service is not accessible from everywhere. For example, if someone plans to implement a modern Ajax-based JavaScript application which should use the workflow microservice, there are two problems:
1) to avoid the cross-origin scripting problem from JavaScript/Ajax, the JavaScript web application needs to be deployed under the same domain as the microservice runs on
2) if the microservice forces user authentication (which is the case in my scenario), the application needs to provide a transparent authentication mechanism.
The situation becomes more complex if the client needs to access more than one user-centric microservice that forces user authentication.
I always end up with an architecture where all services and the client application are running on the same application server under the same domain.
How can these problems be solved? What is the best practice for such an architecture?
Short answer: check OAUTH, and manage caches of credentials in each microservice that needs to access other microservices. By "manage" I mean, be careful with security. Especially, mind who can access those credentials, and let the network topology be your friend. Create a DMZ layer and other internal layers reflecting the dependency graph of your microservices.
Long answer, keep reading. Your question is a good one because there is no simple silver bullet to do what you need although your problem is quite recurrent.
As with everything related with microservices that I saw so far, nothing is really new. Whenever you need to have a distributed system doing things on behalf of a certain user, you need distributed credentials to enable such solution. This is true since mainframe times. There is no way to violate that.
Auto SSH is, in a sense, such a thing. Perhaps it may sound like a glorified way to describe something simple, but in the end, it enables processes in one machine to use services in another machine.
In the Grid world, the Globus Toolkit, for instance, bases its distributed security using the following:
X.509 certificates;
MyProxy - manages a repository of credentials and helps you define a chain of certificate authorities up to finding the root one, which should be trusted by default;
An extension of OpenSSH, which is the de facto standard SSH implementation for Linux distributions.
OAUTH is perhaps what you need. It is a way to provide authorization with extra restrictions. For instance, imagine that a certain user has read and write permission on a certain service. When you issue an OAUTH authorization, you do not necessarily give full user powers to the third party. You may only give read access.
CORS, mentioned in another answer, is useful when the end client (typically a web browser) needs single-sign-on across web sites. But it seems that your problem is closer to a cluster in which you have many microservices that are managed by you. Nevertheless, you can take advantage of solutions developed by the Grid field to ensure security in a cluster distributed across sites (for high availability reasons, for instance).
Complete security is something unattainable. So all this is of no use if credentials are valid forever or if you do not take enough care to keep them secret to whatever received them. For such purpose, I would recommend partitioning your network using layers. Each layer with a different degree of secrecy and exposure to the outside world.
If you do not want the burden to have the required infrastructure to allow for OAUTH, you can either use basic HTTP or create your own tokens.
When using basic HTTP authentication, the client needs to send credentials on each request, therefore eliminating the need to keep session state on the server side for the purpose of authorization.
If you want to create your own mechanism, then change your login requests such that a token is returned as the response to a successful login. Subsequent requests having the same token will act as the basic HTTP authentication with the advantage that this takes place at the application level (in contrast with the framework or app server level in basic HTTP authentication).
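As a rough sketch of that home-grown variant (the cache, token format and lifetime below are arbitrary choices; a multi-node deployment would need a shared store such as a database or Redis instead):

```csharp
using System;
using System.Runtime.Caching;

// Minimal sketch: on a successful login, issue an opaque token id and remember which
// user (and roles) it belongs to; on every later request, resolve the token back to
// that user at the application level.
public sealed class TokenIdentity
{
    public string Username { get; set; }
    public string[] Roles { get; set; }
}

public static class TokenStore
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static string Issue(string username, string[] roles)
    {
        string token = Guid.NewGuid().ToString("N");
        Cache.Set(token, new TokenIdentity { Username = username, Roles = roles },
                  DateTimeOffset.UtcNow.AddMinutes(30));
        return token;   // returned to the client as the login response
    }

    public static bool TryValidate(string token, out TokenIdentity identity)
    {
        identity = Cache.Get(token) as TokenIdentity;
        return identity != null;
    }
}
```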
Your question is about two independent issues.
Making your service accessible from another origin is easily solved by implementing CORS. For non-browser clients, cross-origin is not an issue at all.
The second problem about service authentication is typically solved using token based authentication.
Any caller of one of your microservices would get an access token from the authorization server or STS for that specific service.
Your client authenticates with the authorization server or STS either through an established session (cookies) or by sending a valid token along with the request.
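On the receiving side, validating such a token locally might look like the sketch below, assuming JWTs and the System.IdentityModel.Tokens.Jwt package; the issuer, audience and signing key are placeholders:

```csharp
using Microsoft.IdentityModel.Tokens;   // assumption: System.IdentityModel.Tokens.Jwt package
using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using System.Text;

// Each microservice validates issuer, audience and signature locally, without a
// round-trip to the STS on every request.
public static class JwtValidator
{
    public static ClaimsPrincipal Validate(string token, string signingSecret)
    {
        var parameters = new TokenValidationParameters
        {
            ValidIssuer = "https://sts.example.com",
            ValidAudience = "orders-service",
            IssuerSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(signingSecret))
        };

        var handler = new JwtSecurityTokenHandler();
        return handler.ValidateToken(token, parameters, out SecurityToken _);
    }
}
```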

Web API authentication/authorization using SSO instead of OAUTH - will it work?

Updated based on questions from #user18044 below
If a user is authenticated in two different web applications via 2 different SAML-based identity providers, and one of the applications needs to request data from a web API exposed by the other application, would it be possible to call the web API methods securely by virtue of the user's current authenticated status in both applications without separately securing the API methods via an API level authentication protocol such as OAUTH? Note that both applications are owned and operated by my company and share the same 2nd level domains and user base, even though the identity servers are different (one is legacy).
Some further information: Application A is a portal application that is going to host widgets using data supplied from Application B. Application A will only communicate with application B via a web API exposed by application B. Currently application B does not expose a web API (except internally to the application itself). This is new functionality that will need to be added to application B. Application A will use Okta as its SSO. Our lead architect's proposal is to continue to use a custom legacy IDP server that we developed internally based around the dk.nita.saml20 DLL. They are both SAML based I believe, but I don't think they could share the same identity token without some retrofitting. But this is hitting the limits of my knowledge on the topic of authentication. :) I think our architect's plan was to have the user authenticate separately using the two different identity providers and then only secure the web API using CORS, his reasoning being that since the user is already known and authenticated to use application B, there wouldn't be any security implications in allowing application A to call application B's web API methods, as the user should be authenticated in application B. This seems quirky to me, in that I can imagine a lot of browser redirects happening that might not be transparent to the user, but other than that, I'm just trying to figure out where the security holes might lie, because it feels to me that there would be some.
I know that this approach would not be considered a best practice, however with that being said, I really want to understand why not. Are there security implications? Would it even work? And if so, are there any "gotchas" or things to consider during implementation?
To reiterate, our lead architect is proposing this solution, and it is failing my gut check, but I don't know enough on the topic to be able to justify my position or else to feel comfortable enough to accept his. Hoping some security experts out there could enlighten me.
It's hard to answer without knowing more on how your current applications and APIs are secured exactly. Do the web application and its API have the same relying party identifier (i.e. can the same token be used to authenticate against both)?
If both web applications use the WS-Federation protocol to authenticate users, then most likely the SAML token will be stored in cookies that were set when the identity provider posted the token back to the application.
You do not have access to these cookies from JavaScript. If the web API that belongs to application B uses the same cookie-based authentication mechanism, you could use this, provided you allow for cross-origin resource sharing.
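Assuming application B's API is ASP.NET Web API, enabling CORS with credentials so the browser sends those cookies along could look roughly like this (the origin is a placeholder, and the client would also have to set withCredentials on its requests):

```csharp
using System.Web.Http;
using System.Web.Http.Cors;   // assumption: the Microsoft.AspNet.WebApi.Cors package

// Sketch for application B's Web API: allow application A's origin and let the browser
// send the authentication cookies along with cross-origin requests.
public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        var cors = new EnableCorsAttribute(
            origins: "https://app-a.example.com",
            headers: "*",
            methods: "*")
        {
            SupportsCredentials = true   // required for cookie-based (WS-Fed/SAML) auth
        };
        config.EnableCors(cors);

        config.MapHttpAttributeRoutes();
    }
}
```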
If your web API uses something like a bearer token authentication scheme (like OAuth) or has a different relying party id in the STS, this would obviously not work.
I think the reason this fails your gut check is because you are basically accessing the web API in a way a cross-site request forgery attack would do it.
A problem I see with this approach is that if the user is not authenticated with the other web application, then the call to your API will also fail.
I agree with user18044 as far as it being based on a cross-site request forgery attack and the security between the applications is concerned. Is it true that if User X has access to App A, they will also have access to App B and vice versa? If that is not the case, then each application will need to be authenticated separately... and it won't be SSO. I found these links that might be helpful in your situation.
https://stackoverflow.com/questions/5583460/how-to-implement-secure-single-sign-on-across-various-web-apps
https://developer.salesforce.com/page/Implementing_Single_Sign-On_Across_Multiple_Organizations

WCF using 2 Authentication Methods With Windows Identity Foundation

I'm working on a WCF project that will be our new service layer.
These services will be called by 2 separate clients, the first of which is a WPF application and the other is an ASP.Net web application. The WPF client will be run by internal users and will authenticate with the service via domain authentication and run under the context of that user. The other will be used by external users and needs to authenticate using some separate mechanism then impersonate a "WebUser" account on our domain.
I'm reading a bit about Windows Identity Foundation and it sounds like this might be a good fit. Am I right in thinking I could have 2 token services, one for domain authentication and one for something like ASP.Net membership authentication (or some similar equivalent), and have each client get its token from the relevant STS and pass that along to the WCF service?
I'm assuming there is an STS I can use out of the box for domain authentication, but will I have to implement the second one myself to authenticate web users? I can't find a lot of information on this.
Am I thinking along the right lines, or should I just be creating dual endpoints for each service, each with a different authentication mechanism? Or should I be doing something completely different?
Thanks
The big advantage of using Claims-Based authentication / WIF is that both the task of authenticating the user AND the administration of the user's properties are moved away from the applications to the STS/identity provider.
You are developing a service layer but the true benefits of using WIF will be for the applications written on top of your layer. The WPF application will no longer need to connect to the AD and fetch the user's groups to figure out what they are allowed to do. The groups will already be visible as claims in the token the user/WIF provides.
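For example, a permission check against the incoming claims could be as simple as the sketch below (the role names are made up):

```csharp
using System.Linq;
using System.Security.Claims;

// Sketch: once WIF has populated the principal from the incoming token, group/role
// membership is just a claim lookup - no extra AD query from the client or the service.
public static class CurrentUser
{
    public static bool CanApproveOrders()   // hypothetical permission check
    {
        ClaimsPrincipal principal = ClaimsPrincipal.Current;

        // Roles arrive as role claims issued by the STS.
        return principal.IsInRole("OrderApprovers")
            || principal.Claims.Any(c => c.Type == ClaimTypes.Role && c.Value == "Administrators");
    }
}
```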
The web application (is it just one web application or more?) will no longer need the ASP.Net Membership database with accompanying user administration. This functionality gets moved to the STS.
There is a cost. (There always is, somehow...) Claims-based authentication has a rather steep learning curve. It takes a while for the penny to drop for everyone involved.
So the answer to your question depends on what kind of users the web application(s?) built upon your service layer have and how many. And how much they wish to know about them. Can you perhaps trust Google / Facebook / Windows Live for authentication? Are the users already in an existing database within your domain? How much work will it take to maintain the user directories? Do your marketing people wish to send them emails regularly? Et cetera.
This is probably not just for the service layer's developers to decide, but something to discuss with people in the rest of your organisation.
If the benefits are not particularly big, the alternative is to simply keep these responsibilities at the web application's server. Each web application will have a good old ASP.Net membership database, it'll authenticate the user all by itself. When asking queries from the service layer, it'll present its web server certificate plus specify the user's name and type.
If the benefits are big enough, you can in principle use ADFS 2.0 for everything. It can also store external users nowadays and it's free if you already have Active Directory. Or the ThinkTecture 2.0 server that Ross recommends. It's easier to customize and perhaps your systems administrators and security folks will not be too enthusiastic about opening the firewall to the ADFS server.
Microsoft has some good reads on WIF, in particular an Overview of Claims-Based Architecture.
You should take a look at identity server as it can indeed handle this scenario.
The person who leads the project above has a great pluralsight video on this exact scenario! You need to sign up to watch it, but they offer a free trial.
Basically you get a token from the identity provider (Windows ADFS for the internal client, and whatever you decide on for the external users). You will give this token to the federated gateway (identity server probably, but it could be Azure ACS). This will return an authentication token that you can then use with your service.

Need help in implementing WCF authentication and authorization in internet context

I need to create a WCF service which will be consumed by Silverlight apps downloaded over the internet. Basically the users are not part of any Windows domain, but their credentials and roles are maintained in a database.
The requirements are:
1) The WCF service should be internet-enabled, but methods cannot be accessed anonymously.
2) Authorization should also be supported.
3) User and roles tables are not as per the ASP.NET membership schema.
4) Developers should not be constrained to have IIS installed and certificates configured.
5) Authorized users should be able to access only their related information. A user should be able to delete or update his related companies, but not others'.
To achieve points 1 & 2, I followed the link below.
WCF security by Robbin Cremers
To cover point 3, I have provided custom implementations of the MembershipProvider and RoleProvider classes and overridden the ValidateUser and IsUserInRole methods respectively, so that they fetch from my own user and roles tables.
So far, so good - authentication and authorization work fine.
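For reference, the rough shape of such a RoleProvider is shown below - the in-memory dictionary is just a stand-in for queries against your own Oracle user/role tables, and only the members the authorization path actually uses are implemented (the custom MembershipProvider's ValidateUser follows the same pattern):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web.Security;

public class CustomRoleProvider : RoleProvider
{
    // Stand-in for the real user/role tables; replace the two lookups below with
    // queries against your own schema.
    private static readonly Dictionary<string, string[]> UserRoles =
        new Dictionary<string, string[]>(StringComparer.OrdinalIgnoreCase)
        {
            ["alice"] = new[] { "CompanyAdmin" },
        };

    public override bool IsUserInRole(string username, string roleName) =>
        GetRolesForUser(username).Contains(roleName, StringComparer.OrdinalIgnoreCase);

    public override string[] GetRolesForUser(string username) =>
        UserRoles.TryGetValue(username, out var roles) ? roles : new string[0];

    // The members below are not needed for authorization checks in this scenario.
    public override string ApplicationName { get; set; }
    public override void AddUsersToRoles(string[] usernames, string[] roleNames) => throw new NotSupportedException();
    public override void CreateRole(string roleName) => throw new NotSupportedException();
    public override bool DeleteRole(string roleName, bool throwOnPopulatedRole) => throw new NotSupportedException();
    public override string[] FindUsersInRole(string roleName, string usernameToMatch) => throw new NotSupportedException();
    public override string[] GetAllRoles() => throw new NotSupportedException();
    public override string[] GetUsersInRole(string roleName) => throw new NotSupportedException();
    public override void RemoveUsersFromRoles(string[] usernames, string[] roleNames) => throw new NotSupportedException();
    public override bool RoleExists(string roleName) => throw new NotSupportedException();
}
```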
Now the problem: the developers can't have IIS installed and certificates configured, so I need to disable this in development mode. Hence I have implemented a custom CodeAccessSecurityAttribute class which checks whether we are running in development or production mode and uses a custom IPermission or PrincipalPermission accordingly.
[Question 1] My question is that I don't see this approach recommended anywhere, so I'm a little afraid about whether this is the right approach, or whether there is a better way to handle this situation.
[Question 2] Lastly, related to point 5, do I need to send some kind of token over? What is the best approach for this?
[Question 3] What about the performance impact of Robbin Cremers' method, since for every service call two extra database calls will be made in ValidateUser and IsUserInRole to authenticate and authorize? Is there a better way?
Sorry for the big question.
As far as I can tell from your scenario description, once you've created and hooked up your custom Membership / Role providers, you don't really need Message Security. Instead, the standard approach described in http://msdn.microsoft.com/en-us/library/dd560702(v=vs.95).aspx will work just fine (or http://msdn.microsoft.com/en-us/library/dd560704(v=vs.95).aspx if you want users to log in via your Silverlight app instead of via a regular web page). The browser then handles the sending of any required tokens automatically (via cookies), and I'm guessing ASP.NET is smart about caching authentication/authorization results so the overhead should not be incurred on every call.
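If profiling does show that the per-call ValidateUser/IsUserInRole lookups matter, one option is a small cache in front of the providers; a minimal sketch (the key format and the five-minute window are arbitrary choices):

```csharp
using System;
using System.Runtime.Caching;

// Sketch: cache validated-user / role-membership results so the two extra database
// calls per service operation are only paid once per user per cache window.
static class AuthCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static bool GetOrAdd(string key, Func<bool> load)
    {
        if (Cache.Get(key) is bool cached)
            return cached;

        bool value = load();  // e.g. the real DB lookup from IsUserInRole
        Cache.Set(key, value, DateTimeOffset.UtcNow.AddMinutes(5));
        return value;
    }
}

// Usage inside the custom RoleProvider:
// return AuthCache.GetOrAdd($"role:{username}:{roleName}",
//                           () => QueryRoleFromDatabase(username, roleName));
```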