Need help implementing WCF authentication and authorization in an internet context

I need to create a WCF service which will be consumed by Silverlight apps downloaded over the internet. The users are not part of any Windows domain; their credentials and roles are maintained in a database.
My requirements are:
1. The WCF service should be internet-enabled, but its methods must not be accessible anonymously.
2. Authorization should also be supported.
3. The user and role tables do not follow the ASP.NET membership schema.
4. Developers should not be required to have IIS installed and a certificate configured.
5. An authorized user should only be able to access their own information: they should be able to delete or update their own companies, but not anyone else's.
To achieve points 1 and 2, I followed the link below:
WCF security by Robbin Cremers
To cover point 3, I have provided custom implementations of the MembershipProvider and RoleProvider classes, overriding ValidateUser and IsUserInRole respectively to query my own user and role tables.
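A simplified sketch of the kind of data access such overrides delegate to is shown below. The table and column names (Users, Roles, UserRoles, PasswordHash) and the connection string name are placeholders, not the exact schema, and the caller is assumed to hash the incoming password with whatever salt/algorithm was used at registration.

```csharp
using System.Configuration;
using System.Data.SqlClient;

// Sketch only: the queries the custom MembershipProvider.ValidateUser and
// RoleProvider.IsUserInRole overrides could delegate to.
public static class CustomUserStore
{
    private static readonly string ConnectionString =
        ConfigurationManager.ConnectionStrings["Security"].ConnectionString;

    public static bool ValidateUser(string username, string hashedPassword)
    {
        const string sql =
            "SELECT COUNT(*) FROM Users WHERE UserName = @user AND PasswordHash = @hash";

        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@user", username);
            command.Parameters.AddWithValue("@hash", hashedPassword);
            connection.Open();
            return (int)command.ExecuteScalar() > 0;
        }
    }

    public static bool IsUserInRole(string username, string roleName)
    {
        const string sql =
            @"SELECT COUNT(*) FROM UserRoles ur
              JOIN Users u ON u.UserId = ur.UserId
              JOIN Roles r ON r.RoleId = ur.RoleId
              WHERE u.UserName = @user AND r.RoleName = @role";

        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@user", username);
            command.Parameters.AddWithValue("@role", roleName);
            connection.Open();
            return (int)command.ExecuteScalar() > 0;
        }
    }
}
```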
So far, so good: authentication and authorization work fine.
Now the problem: the developers can't have IIS installed and certificates configured, so this needs to be disabled in development mode. I have therefore implemented a custom CodeAccessSecurityAttribute class which checks whether we are running in development or production and uses a custom IPermission or PrincipalPermission accordingly.
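A simplified sketch of what such an attribute can look like is below. The "SecurityMode" appSetting is just one possible way to detect a development build, and the production branch uses a plain PrincipalPermission role demand rather than a custom IPermission.

```csharp
using System;
using System.Configuration;
using System.Security;
using System.Security.Permissions;

// Sketch of an environment-aware security attribute: unrestricted in development,
// a role demand against the current principal in production.
[AttributeUsage(AttributeTargets.Method | AttributeTargets.Class, AllowMultiple = true)]
public sealed class EnvironmentAwareRoleAttribute : CodeAccessSecurityAttribute
{
    public string Role { get; set; }

    public EnvironmentAwareRoleAttribute(SecurityAction action) : base(action) { }

    public override IPermission CreatePermission()
    {
        bool development = string.Equals(
            ConfigurationManager.AppSettings["SecurityMode"], "Development",
            StringComparison.OrdinalIgnoreCase);

        // In development no IIS/certificates are available, so grant everything.
        if (development)
            return new PrincipalPermission(PermissionState.Unrestricted);

        // In production, demand that the current principal is in the configured role.
        return new PrincipalPermission(null, Role);
    }
}
```

It would then be applied declaratively on a service operation, e.g. [EnvironmentAwareRole(SecurityAction.Demand, Role = "CompanyAdmin")].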
[Question 1] I don't see this approach recommended anywhere, so I'm a little afraid: is this the right approach, or is there a better way to handle this situation?
[Question 2] Lastly, related to point 5, do I need to send some kind of token over? What is the best approach for this?
[Question 3] What about the performance impact of Robbin Cremers' method, since for every service call two extra database calls are made in ValidateUser and IsUserInRole to authenticate and authorize? Is there a better way?
Sorry for the big question.

As far as I can tell from your scenario description, once you've created and hooked up your custom Membership / Role providers, you don't really need Message Security. Instead, the standard approach described in http://msdn.microsoft.com/en-us/library/dd560702(v=vs.95).aspx will work just fine (or http://msdn.microsoft.com/en-us/library/dd560704(v=vs.95).aspx if you want users to log in via your Silverlight app instead of via a regular web page). The browser then handles the sending of any required tokens automatically (via cookies), and I'm guessing ASP.NET is smart about caching authentication/authorization results so the overhead should not be incurred on every call.
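If the per-call database overhead from Question 3 does become a problem, one option (just a sketch, not something the linked articles prescribe) is to cache lookups inside the custom providers for a short time, so repeated calls by the same user don't hit the database every time. The cache lifetime and key format here are assumptions.

```csharp
using System;
using System.Runtime.Caching;

// Sketch of a small cache the custom RoleProvider could consult before hitting the database.
public static class RoleCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;
    private static readonly TimeSpan Lifetime = TimeSpan.FromMinutes(5);

    public static bool IsUserInRole(string username, string roleName,
                                    Func<string, string[]> loadRolesFromDatabase)
    {
        string key = "roles:" + username;
        var roles = Cache.Get(key) as string[];
        if (roles == null)
        {
            roles = loadRolesFromDatabase(username);               // the single database hit
            Cache.Set(key, roles, DateTimeOffset.Now.Add(Lifetime));
        }
        return Array.IndexOf(roles, roleName) >= 0;
    }
}
```

The trade-off is that role changes only take effect once the cache entry expires, so the lifetime should be chosen accordingly.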

Does it make sense to use OAuth for a native desktop app that owns the resources it uses?

We have a native Windows desktop app that uses resources that we control on behalf of our customers. In the vein of not rolling our own security infrastructure, I am wondering if it makes sense to use an OAuth library / framework like IdentityServer (our frontend and backend stacks are .NET based, with ASP.NET Core on the backend).
But from what I have read, OAuth is all about giving an application access to resources that the user owns, which are managed and controlled by another party, without exposing the user's security credentials to the application.
Given that the application is, from our point of view, "trusted", it seems more straightforward for the application to capture the password directly from the user and obtain an access token (e.g. a bearer token) directly from the back end, rather than redirecting the user to the web browser.
Management of authorization levels for various resources is something we need to take care of robustly, as we will have multiple applications and users which will need configurable access levels to different types of resources, so I don't really want to be rolling our own solution for this.
We also want the ability for users to remain logged in for indefinite periods of time, but to be able to revoke their access via a configuration change on the back end.
Should we be using a different type of framework to help ensure our implementation is sound from a security point of view? If so, any suggestions of suitable technology options would be most helpful.
Alternatively, is there an OAuth flow that makes sense in this case?
It sounds like the "Resource Owner Password Credentials Grant" might help with your problem.
In the short term, the use of OAuth may not seem very different from the usual "username/password + RBAC" model, but the benefits may come in terms of scalability later on, for example when single sign-on needs to be implemented, or when service interfaces need to be provided to third parties.
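For what it's worth, the token request for that grant is just a form POST to the authorization server's token endpoint. The sketch below uses placeholder values throughout: the /connect/token path follows IdentityServer's convention, and the client id/secret and scope names depend entirely on how the server is configured.

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

// Sketch of a Resource Owner Password Credentials token request from the desktop app.
public static class TokenClient
{
    public static async Task<string> RequestTokenAsync(string userName, string password)
    {
        using (var http = new HttpClient())
        {
            var form = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["grant_type"] = "password",
                ["username"] = userName,
                ["password"] = password,
                ["scope"] = "api offline_access",      // offline_access -> refresh token
                ["client_id"] = "desktop-app",
                ["client_secret"] = "secret"
            });

            var response = await http.PostAsync("https://auth.example.com/connect/token", form);
            response.EnsureSuccessStatusCode();

            // The JSON body contains access_token, expires_in and (if allowed) refresh_token;
            // parsing is left out of this sketch.
            return await response.Content.ReadAsStringAsync();
        }
    }
}
```

If the authorization server issues refresh tokens, revoking a user's refresh token on the back end is what satisfies the "logged in indefinitely, but revocable" requirement.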

Microservices - how to solve security and user authentication?

There is a lot of discussion about microservice architecture. What I am missing, or maybe have not yet understood, is how to solve the issue of security and user authentication.
For example: I develop a microservice which provides a Rest Service interface to a workflow engine. The engine is based on JEE and runs on application servers like GlassFish or Wildfly.
One of the core concepts of the workflow engine is that each call is user-centric. This means that depending on the role and access level of the current user, the workflow engine produces individual results (e.g. a user-centric task list, or processing an open task which depends on the user's role in the process).
In my eyes, such a service is therefore not accessible from everywhere. For example, if someone plans to implement a modern Ajax-based JavaScript application which should use the workflow microservice, there are two problems:
1) To avoid cross-origin problems with JavaScript/Ajax, the JavaScript web application needs to be deployed under the same domain the microservice runs on.
2) If the microservice forces user authentication (which is the case in my scenario), the application needs to provide a transparent authentication mechanism.
The situation becomes more complex if the client needs to access more than one user-centric microservice that forces user authentication.
I always end up with an architecture where all services and the client application run on the same application server under the same domain.
How can these problems be solved? What is the best practice for such an architecture?
Short answer: check OAuth, and manage caches of credentials in each microservice that needs to access other microservices. By "manage" I mean: be careful with security. Especially, mind who can access those credentials, and let the network topology be your friend. Create a DMZ layer and other internal layers reflecting the dependency graph of your microservices.
Long answer: keep reading. Your question is a good one because there is no simple silver bullet for what you need, although your problem is quite common.
As with everything related to microservices that I have seen so far, nothing is really new. Whenever you need a distributed system doing things on behalf of a certain user, you need distributed credentials to enable such a solution. This has been true since mainframe times; there is no way around it.
Automated SSH is, in a sense, such a thing. Perhaps it sounds like a glorified way to describe something simple, but in the end it enables processes on one machine to use services on another machine.
In the Grid world, the Globus Toolkit, for instance, bases its distributed security on the following:
X.509 certificates;
MyProxy - manages a repository of credentials and helps you define a chain of certificate authorities up to finding the root one, which should be trusted by default;
An extension of OpenSSH, which is the de facto standard SSH implementation for Linux distributions.
OAuth is perhaps what you need. It is a way to provide authorization with extra restrictions. For instance, imagine that a certain user has read and write permission on a certain service. When you issue an OAuth authorization, you do not necessarily give full user powers to the third party. You may only give read access.
CORS, mentioned in another answer, is useful when the end client (typically a web browser) needs single-sign-on across web sites. But it seems that your problem is closer to a cluster in which you have many microservices that are managed by you. Nevertheless, you can take advantage of solutions developed by the Grid field to ensure security in a cluster distributed across sites (for high availability reasons, for instance).
Complete security is unattainable. So all of this is of no use if credentials are valid forever, or if whatever receives them does not take enough care to keep them secret. For that purpose, I would recommend partitioning your network using layers, each with a different degree of secrecy and exposure to the outside world.
If you do not want the burden of the infrastructure required for OAuth, you can either use basic HTTP authentication or create your own tokens.
When using basic HTTP authentication, the client sends credentials on each request, thereby eliminating the need to keep session state on the server side for the purpose of authorization.
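As a small illustration (endpoint URL and credentials are placeholders), a stateless client call with basic authentication just sets the Authorization header on every request; TLS is assumed, since the credentials are only Base64-encoded, not encrypted.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static class BasicAuthClient
{
    public static async Task<string> GetTasksAsync(string user, string password)
    {
        using (var client = new HttpClient())
        {
            // Basic auth: "user:password", Base64-encoded, sent on every request.
            var raw = Encoding.UTF8.GetBytes(user + ":" + password);
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Basic", Convert.ToBase64String(raw));

            var response = await client.GetAsync("https://workflow.example.com/api/tasks");
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }
}
```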
If you want to create your own mechanism, change your login request so that a token is returned in the response to a successful login. Subsequent requests carrying the same token then act like basic HTTP authentication, with the advantage that this takes place at the application level (in contrast to the framework or app-server level with basic HTTP authentication).
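A bare-bones sketch of such a token mechanism might look like the following. Everything here is illustrative: the in-memory store only works for a single service instance, the one-hour lifetime is arbitrary, and validating the user's credentials before calling IssueToken is assumed to happen elsewhere.

```csharp
using System;
using System.Collections.Concurrent;
using System.Security.Cryptography;

// Sketch: login issues an opaque token; later requests present it for validation.
public static class SimpleTokenService
{
    private static readonly ConcurrentDictionary<string, (string User, DateTime Expires)> Tokens =
        new ConcurrentDictionary<string, (string, DateTime)>();

    public static string IssueToken(string userName)
    {
        var bytes = new byte[32];
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(bytes);

        string token = Convert.ToBase64String(bytes);
        Tokens[token] = (userName, DateTime.UtcNow.AddHours(1));
        return token;
    }

    public static bool TryValidate(string token, out string userName)
    {
        userName = null;
        if (Tokens.TryGetValue(token, out var entry) && entry.Expires > DateTime.UtcNow)
        {
            userName = entry.User;
            return true;
        }
        return false;
    }
}
```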
Your question is about two independent issues.
Making your service accessible from another origin is easily solved by implementing CORS. For non-browser clients, cross-origin is not an issue at all.
The second problem about service authentication is typically solved using token based authentication.
Any caller of one of your microservices would get an access token from the authorization server or STS for that specific service.
Your client authenticates with the authorization server or STS either through an established session (cookies) or by sending a valid token along with the request.
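As an illustration of sending the token along with each request, the calling client (or a microservice calling another microservice) can attach it once in a message handler instead of at every call site. The class and names below are made up; how the token is obtained from the STS is outside this sketch.

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading;
using System.Threading.Tasks;

// Sketch: attaches a bearer access token to every outgoing request.
public class BearerTokenHandler : DelegatingHandler
{
    private readonly string _accessToken;

    public BearerTokenHandler(string accessToken, HttpMessageHandler inner) : base(inner)
    {
        _accessToken = accessToken;
    }

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", _accessToken);
        return base.SendAsync(request, cancellationToken);
    }
}

// Usage: var client = new HttpClient(new BearerTokenHandler(token, new HttpClientHandler()));
```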

Web API authentication/authorization using SSO instead of OAUTH - will it work?

Updated based on questions from #user18044 below
If a user is authenticated in two different web applications via 2 different SAML-based identity providers, and one of the applications needs to request data from a web API exposed by the other application, would it be possible to call the web API methods securely by virtue of the user's current authenticated status in both applications without separately securing the API methods via an API level authentication protocol such as OAUTH? Note that both applications are owned and operated by my company and share the same 2nd level domains and user base, even though the identity servers are different (one is legacy).
Some further information: Application A is a portal application that is going to host widgets using data supplied by Application B. Application A will only communicate with Application B via a web API exposed by Application B. Currently Application B does not expose a web API (except internally to the application itself); this is new functionality that will need to be added to it. Application A will use Okta as its SSO. Our lead architect's proposal is to continue to use a custom legacy IdP server that we developed internally based around the dk.nita.saml20 DLL.

They are both SAML-based, I believe, but I don't think they could share the same identity token without some retrofitting. This is hitting the limits of my knowledge on the topic of authentication. :) I think our architect's plan was to have the user authenticate separately against the two different identity providers and then only secure the web API using CORS, his reasoning being that since the user is already known and authenticated to use Application B, there wouldn't be any security implications in allowing Application A to call Application B's web API methods, as the user should be authenticated in Application B.

This seems quirky to me, in that I can imagine a lot of browser redirects happening that might not be transparent to the user, but other than that, I'm just trying to figure out where the security holes might lie, because it feels to me that there would be some.
I know that this approach would not be considered a best practice; however, with that being said, I really want to understand why not. Are there security implications? Would it even work? And if so, are there any "gotchas" or things to consider during implementation?
To reiterate, our lead architect is proposing this solution, and it is failing my gut check, but I don't know enough on the topic to be able to justify my position or to feel comfortable enough to accept his. I'm hoping some security experts out there can enlighten me.
It's hard to answer without knowing more about exactly how your current applications and APIs are secured. Do the web application and its API have the same relying party identifier (i.e. can the same token be used to authenticate against both)?
If both web applications use the WS-Federation protocol to authenticate users, then most likely the SAML token will be stored in cookies that were set when the identity provider posted the token back to the application.
You do not have access to these cookies from JavaScript. If the web API that belongs to application B uses the same cookie-based authentication mechanism, you could use this, provided you allow for cross-origin resource sharing (CORS).
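Concretely, that would mean application B's API sends CORS headers naming application A's origin and allowing credentials, and application A's JavaScript sets withCredentials on its requests. The sketch below shows the server side as Global.asax-style code; the origin value is a placeholder, and note that a wildcard origin is not permitted when credentials are involved.

```csharp
using System;
using System.Web;

// Sketch only: CORS headers application B would need so that cookie-authenticated
// cross-origin calls from application A's pages are allowed by the browser.
public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        var context = HttpContext.Current;
        context.Response.AddHeader("Access-Control-Allow-Origin", "https://portal.example.com");
        context.Response.AddHeader("Access-Control-Allow-Credentials", "true");
        context.Response.AddHeader("Access-Control-Allow-Headers", "Content-Type");

        // Answer CORS preflight requests directly, before the authenticated API code runs.
        if (context.Request.HttpMethod == "OPTIONS")
        {
            context.Response.StatusCode = 200;
            CompleteRequest();
        }
    }
}
```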
If your web API uses something like a bearer token authentication scheme (like OAuth) or has a different relying party id in the STS, this would obviously not work.
I think the reason this fails your gut check is that you are basically accessing the web API in the same way a cross-site request forgery attack would.
A problem I see with this approach is that if the user is not authenticated with the other web application, then the call to your API will also fail.
I agree with user18044 as far as this resembling a cross-site request forgery pattern and the security between the applications. Is it true that if User X has access to App A, they will also have access to App B, and vice versa? If that is not the case, then each application will need to authenticate the user separately... and it won't be SSO. I found these links that might be helpful in your situation.
https://stackoverflow.com/questions/5583460/how-to-implement-secure-single-sign-on-across-various-web-apps
https://developer.salesforce.com/page/Implementing_Single_Sign-On_Across_Multiple_Organizations

WCF using 2 Authentication Methods With Windows Identity Foundation

I'm working on a WCF project that will be our new service layer.
These services will be called by 2 separate clients: the first is a WPF application and the other is an ASP.NET web application. The WPF client will be run by internal users and will authenticate with the service via domain authentication and run under the context of that user. The other will be used by external users and needs to authenticate using some separate mechanism, then impersonate a "WebUser" account on our domain.
I'm reading a bit about Windows Identity Foundation and it sounds like this might be a good fit. Am I right in thinking I could have 2 token services, one for domain authentication and one for something like ASP.NET membership authentication (or some similar equivalent), and have each client get its token from the relevant STS and pass that along to the WCF service?
I'm assuming there is an STS I can use out of the box for domain authentication, but will I have to implement the second one myself to authenticate web users? I can't find a lot of information on this.
Am I thinking along the right lines, or should I just be creating dual endpoints for each service, each with a different authentication mechanism? Or should I be doing something completely different?
Thanks
The big advantage of using Claims-Based authentication / WIF is that both the task of authenticating the user AND the administration of the user's properties are moved away from the applications to the STS/identity provider.
You are developing a service layer but the true benefits of using WIF will be for the applications written on top of your layer. The WPF application will no longer need to connect to the AD and fetch the user's groups to figure out what they are allowed to do. The groups will already be visible as claims in the token the user/WIF provides.
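To make that concrete, a check that used to require an AD group lookup becomes a claim inspection on the current principal; the claim type and role name below are examples only.

```csharp
using System.Security.Claims;
using System.Threading;

// Sketch: authorization as a claim lookup once WIF has processed the incoming token.
public static class Authorization
{
    public static bool CanApproveOrders()
    {
        var principal = Thread.CurrentPrincipal as ClaimsPrincipal;
        return principal != null && principal.HasClaim(ClaimTypes.Role, "OrderApprovers");
    }
}
```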
The web application (is it just one web application or more?) will no longer need the ASP.Net Membership database with accompanying user administration. This functionality gets moved to the STS.
There is a cost. (There always is, somehow...) Claims-Based authentication has a rather steep learning curve. It takes a while for the penny to drop for everyone involved.
So the answer to your question depends on what kind of users the web application(s?) built upon your service layer have and how many. And how much they wish to know about them. Can you perhaps trust Google / Facebook / Windows Live for authentication? Are the users already in an existing database within your domain? How much work will it take to maintain the user directories? Do your marketing people wish to send them emails regularly? Et cetera.
This is probably not just for the service layer's developers to decide, but something to discuss with people in the rest of your organisation.
If the benefits are not particularly big, the alternative is to simply keep these responsibilities on the web application's server. Each web application will have a good old ASP.NET membership database and will authenticate the user all by itself. When it queries the service layer, it will present its web server certificate and specify the user's name and type.
If the benefits are big enough, you can in principle use ADFS 2.0 for everything. It can also store external users nowadays and it's free if you already have Active Directory. Or the ThinkTecture 2.0 server that Ross recommends. It's easier to customize and perhaps your systems administrators and security folks will not be too enthusiastic about opening the firewall to the ADFS server.
Microsoft has some good reads on WIF, in particular an Overview of Claims-Based Architecture.
You should take a look at identity server as it can indeed handle this scenario.
The person who leads the project above has a great pluralsight video on this exact scenario! You need to sign up to watch it, but they offer a free trial.
Basically you get a token from the identity provider (Windows ADFS for the internal client, and whatever you decide on for the external users). You then give this token to the federated gateway (probably identity server, but it could be Azure ACS). This returns an authentication token that you can then use with your service.
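Sketched in code, the flow for the internal (Windows-authenticated) client might look roughly like this using the stock WS-Trust client classes in .NET 4.5. Every address, the binding choices, the key type and the service contract are placeholders; the real ADFS/identity server configuration dictates the actual values.

```csharp
using System.IdentityModel.Protocols.WSTrust;
using System.IdentityModel.Tokens;
using System.ServiceModel;
using System.ServiceModel.Security;

public static class IssuedTokenClientFactory
{
    public static IServiceLayer CreateChannel()
    {
        // 1. Ask the STS for a token, authenticating with the caller's Windows credentials.
        var stsBinding = new WS2007HttpBinding(SecurityMode.TransportWithMessageCredential);
        stsBinding.Security.Message.ClientCredentialType = MessageCredentialType.Windows;
        stsBinding.Security.Message.EstablishSecurityContext = false;

        var trustFactory = new WSTrustChannelFactory(
            stsBinding,
            new EndpointAddress("https://sts.example.com/adfs/services/trust/13/windowsmixed"));
        trustFactory.TrustVersion = TrustVersion.WSTrust13;

        var request = new RequestSecurityToken
        {
            RequestType = RequestTypes.Issue,
            AppliesTo = new EndpointReference("https://services.example.com/ServiceLayer.svc"),
            KeyType = KeyTypes.Bearer
        };

        SecurityToken token = trustFactory.CreateChannel().Issue(request);

        // 2. Call the WCF service, presenting the issued token.
        var serviceBinding = new WS2007FederationHttpBinding(
            WSFederationHttpSecurityMode.TransportWithMessageCredential);

        var serviceFactory = new ChannelFactory<IServiceLayer>(
            serviceBinding,
            new EndpointAddress("https://services.example.com/ServiceLayer.svc"));
        serviceFactory.Credentials.SupportInteractive = false;
        serviceFactory.Credentials.UseIdentityConfiguration = true;

        return serviceFactory.CreateChannelWithIssuedToken(token);
    }
}

// Placeholder service contract for the service layer.
[ServiceContract]
public interface IServiceLayer
{
    [OperationContract]
    string Ping();
}
```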

Implementing "true" Single Sign-On: OpenID, something else, or custom hack?

We're trying to evaluate a solution to implement "true" SSO for multiple (already existing) web solutions. True SSO here means to login on any service, and be authenticated on all, without further actions from the user.
All of the applications we're going to use support OpenID and/or have plugins that allow OpenID, so this seems like something worth looking into. However, as I understand OpenID, the users would still be required to enter their OpenID credentials in each service.
Is there a sane way to implement SSO with automatic login once the OpenID provider has authenticated the user?
In an earlier project, we hacked up the PHP session data in the login procedures of two applications (both running on the same domain and server) so a login in the first application creates the session data for the other application as well. However, this is a very hacky solution and is prone to break when either application is updated, so we're trying to avoid it this time.
Are there any other SSO solutions that we could look into?
I am assuming that you have control over the SSO implementation.
There are some things you can do to make sure that once the user has been recognized by the SSO application, they will be logged in to your other applications virtually automatically:
In your SSO application, create a whitelist of service providers. Authentication requests from those websites will be automatically approved, so the user won't be asked to approve each request manually (a small sketch of such a whitelist check follows below).
In your application, set the return_to parameter to the page the user intends to open, rather than simply setting return_to to that application's home page.
By the way, a standard OpenID implementation accepts any URL. However, if you want to use SSO in a controlled environment, you can give the service provider a whitelist of trusted identity providers; after all, it is the service provider which initiates OpenID authentication.
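A whitelist check like the one suggested above could be as simple as the sketch below; the realm values and where the check is called from in the provider pipeline are assumptions.

```csharp
using System;
using System.Collections.Generic;

// Sketch: the identity provider auto-approves requests from its own trusted relying parties,
// so users are never shown an approval prompt for them.
public static class TrustedRelyingParties
{
    private static readonly HashSet<string> Realms =
        new HashSet<string>(StringComparer.OrdinalIgnoreCase)
        {
            "https://app1.example.com/",
            "https://app2.example.com/"
        };

    public static bool ShouldAutoApprove(Uri realm) => Realms.Contains(realm.ToString());
}
```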
Yes, there is a way to do this. Run a Node-based application server and use cross-domain techniques to offer cookie-based credentials (backed up by site handshakes as each new user arrives, to scale better and minimize per-session resource expenditure).
I am working on such a beast right now, and I'm 5/6th done.
I have taken care of several annoying variables up front (including the means to ensure unique user logon), and I've taken a stand on other issues; one just can't get everything done in one system. However, one can have true SSO if one is willing to pull out some stops. It is YOUR stops which will define your solution. If you have not accurately portrayed your limitations, then there isn't a solution which can be offered for implementation here; the nature of your problem is ENTIRELY implementation, not theory. In theory you have 4-5 different options. In practice you will find your answers.