Here's something similar to this question on general SSO best practices. What is the best approach for dealing with a central identity provider that is disabled or, for whatever reason, unreachable? If your website allows users to log in with their centrally stored credentials, and the central service is not working or unreachable, do you:
Allow users to re-enter their credentials on the local site, so that they can use the native login facility of the web application (or content management system or whatever)
Allow users to request a secondary set of credentials that they can use on the web application itself (i.e., a separate password they can use when the IDP is down) [NOTE: obviously this defeats the 'single credentials' goal - I'm just tossing out ideas].
Allow users to log in using any of several providers that maintain the same credentials (by giving them multiple links to multiple providers, and then trying each one until one of them actually connects and works)
For reasons that are probably apparent, none of the solutions above seems attractive, so feel free to put them on the "worst practices" list while you answer with the best alternative approach.
I'd think the best way would be to have decentralized SSO, as implemented in OpenID. If each account can have many providers, then if one provider goes down, you can fail over to another.
On the other hand, if centralized SSO is required, the only thing I can think of would be to have the central authority issue a cryptographic certificate to each user. If the service has a fresh copy of the certificate revocation list and a cached copy of the central authority's public key, it can validate certificates even while the central authority is unavailable. Unfortunately, this method would probably suffer from usability issues, as users would need to both keep a copy of their certificate handy and know what you're talking about when you ask for it.
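To make the offline-validation idea concrete, here is a rough sketch using Python's cryptography package; it assumes an RSA-signed certificate, PEM-encoded inputs, and skips expiry and chain checks:

```python
# Sketch: validate a user's certificate against a cached copy of the central
# authority's certificate/public key and a previously downloaded CRL.
# Assumes the `cryptography` package and an RSA-signed certificate.
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

def is_certificate_valid(user_cert_pem, ca_cert_pem, crl_pem):
    cert = x509.load_pem_x509_certificate(user_cert_pem)
    ca_cert = x509.load_pem_x509_certificate(ca_cert_pem)
    crl = x509.load_pem_x509_crl(crl_pem)

    # 1. Verify the central authority's signature on the certificate
    #    using the cached CA certificate's public key.
    try:
        ca_cert.public_key().verify(
            cert.signature,
            cert.tbs_certificate_bytes,
            padding.PKCS1v15(),              # assumes RSA with PKCS#1 v1.5
            cert.signature_hash_algorithm,
        )
    except Exception:
        return False

    # 2. Check the (possibly stale) revocation list.
    return crl.get_revoked_certificate_by_serial_number(cert.serial_number) is None
```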
Our SAAS system is currently using the standard Microsoft.AspNet.Identity.Owin libraries to authenticate users via Bearer tokens, as well as social logins such as Facebook/Google/Twitter/etc.
Some of our users are asking us to start allowing authentication via ADFS.
I'm trying to understand how this can be done. Unfortunately, all of the blogs appear to dive right into the details without providing a good overview of what's involved. Furthermore, most blogs talk about trusting a specific Active Directory, while we need to trust any number of possible customers' Active Directories - and do it dynamically. I.e.: a customer registers for an account using a custom username/password, then provides our SAAS application with some information about their AD. Afterwards, our SAAS application should trust authentication for users in that AD (just the auth part).
Can anyone provide information on what's involved?
TIA
Agree with @vibronet's points.
Another approach would be to add STS support to your SaaS application. This could be either WS-Fed or SAML. You have tagged the question with Azure so AAD could be an option.
You could then federate with any number of other STSs (like ADFS). Note, as stated, that each ADFS administrator has to agree to add your metadata.
Another approach would be to use IDaaS (e.g. Auth0, Okta). These would do the Identity heavy lifting for you and would essentially provide the STS capability.
The question has two parts:
1. how to work with an ADFS instance, and
2. how to deal with an arbitrary number of ADFS instances from different owners.
The answer to 1) is to use the WS-Federation middleware, which can be added alongside the middlewares you are already using. However, the initialization of that middleware requires knowing the location of the metadata document of the ADFS you want to target; furthermore, the ADFS administrator must provision your app explicitly, or no tokens will be issued. Hence, the flow you are suggesting (temporary username/password and subsequent details exchange) might be tricky - but not impossible.
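As a rough illustration of what that onboarding step involves, here is a minimal sketch (in Python, purely illustrative since your app is .NET) of fetching a customer's ADFS federation metadata and extracting the token-signing certificates your app would need to trust; the URL path is the standard ADFS metadata location, and the function name and error handling are assumptions:

```python
# Illustrative sketch: fetch a customer's ADFS federation metadata document
# and pull out the token-signing certificates published in it.
import urllib.request
import xml.etree.ElementTree as ET

MD_NS = "{urn:oasis:names:tc:SAML:2.0:metadata}"
DS_NS = "{http://www.w3.org/2000/09/xmldsig#}"

def fetch_signing_certs(adfs_host):
    url = f"https://{adfs_host}/FederationMetadata/2007-06/FederationMetadata.xml"
    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())
    certs = []
    for kd in root.iter(f"{MD_NS}KeyDescriptor"):
        if kd.get("use") == "signing":
            for el in kd.iter(f"{DS_NS}X509Certificate"):
                certs.append(el.text.strip())   # base64-encoded X.509 certificates
    return certs
```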
About 2), there isn't a way of wiring up an arbitrary number of different ADFS instances unless you modify the middleware settings pretty heavily. The standard practice for dealing with that scenario is to rely on one intermediary ADFS (or equivalent) that can broker trust toward all the others, so that your app only needs to trust the intermediary.
In the context of WCF/Web Services/WS-Trust federated security, what are the generally accepted ways to authenticate an application, rather than a user? From what I gather, it seems like certificate authentication would be the way to go, i.e., generate a certificate specifically for the application. Am I on the right track here? Are there other alternatives to consider?
What you are trying to do is solve the general Digital Rights Management problem, which is an unsolved problem at the moment.
There are a whole host of options for remote attestation that involve trying to hide secrets of some sort (traditional secret keys, or semi-secret behavioural characteristics).
Some simple examples that might deter casual users of your API from working around it:
Include &officialclient=yes in the request
Include &appkey=<some big random key> in the request
Store a secret with the app and use a simple challenge/response: send a random nonce to the app and the app returns HMAC(secret, nonce)
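A minimal sketch of that challenge/response idea, with the secret value, nonce size, and function names all chosen here purely for illustration:

```python
# Sketch of the shared-secret challenge/response. The secret and function
# names are illustrative, not part of any particular API.
import hmac, hashlib, secrets

APP_SECRET = b"embedded-in-the-official-client"   # assumption: shipped inside the app

def make_challenge():
    return secrets.token_bytes(16)                 # random nonce sent to the client

def client_response(nonce):
    return hmac.new(APP_SECRET, nonce, hashlib.sha256).hexdigest()

def server_verify(nonce, response):
    expected = hmac.new(APP_SECRET, nonce, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)
```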
In general, however, the 'defender's advantage' is quite small: however much effort you put into trying to authenticate that the bit of software talking to you is in fact your software, it isn't going to take your attacker/user much more effort to emulate it. (To break the third example I gave, you don't even need to reverse-engineer the official client - the user can just hook up the official client to answer the challenges their own client receives.)
The more robust avenue you can pursue is licencing / legal options. A famous example would be Twitter, who prevent you from knocking up any old client through their API licence terms and conditions - if you created your own (popular) client that pretended to the Twitter API to be the official Twitter client, the assumption is their lawyers would come a-knocking.
If the application is under your control (e.g. your server) then by all means use a certificate.
If this is an application under a user's control (desktop), then there is no real way to authenticate the app in a strong way. Even if you use a certificate, a user can extract it and send messages outside the context of that application.
If this is not a critical, high-security system, you could do something good enough, like embedding the certificate inside the application's resources. But remember: once the application is physically on the user's machine, every secret inside it can sooner or later be revealed.
I have a web-based application which is used to find information about various assets in a facility. It provides only search capability; no CRUD operations are allowed from the application (except READ). This web application is always kept open on a touchscreen device (i.e., a workstation) and could be used by any of the facility staff. The users do not want to log in and out for every search operation.
We are planning to deploy the web application to the cloud. Although there is no need to authenticate the individual user accessing the web application, there is still a need to ensure that information about assets in the facility is not accessible to others. How do I build this authentication layer? The options I can think of are:
1. Include the userid/password in the URL as parameters. I could create a userid/password pair for each facility. Simple, but the userid/password is always visible.
2. Certificate-based approach. Certificates are created for each of these workstations and deployed on them. Quite secure, but this has the challenge of managing the certificates' life cycle, as well as the challenge of configuring the web servers with certs from different facilities.
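For reference, option 2 seen from the workstation's side would look roughly like the following mutual-TLS sketch; the endpoint URL and file paths are made up, and the server would have to be configured to require and verify client certificates:

```python
# Illustrative only: present a per-facility client certificate with each request.
# The URL and certificate paths are assumptions.
import requests

ASSET_SEARCH_URL = "https://assets.example.com/search"   # assumed endpoint

resp = requests.get(
    ASSET_SEARCH_URL,
    params={"q": "pump-104"},
    cert=("/etc/facility/workstation.crt", "/etc/facility/workstation.key"),
)
print(resp.status_code, resp.json())
```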
Any suggestions?
Thanks,
Prasanna
A simple, but not secure, option: do an IP check, and if the IP is from your facility, grant access.
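A quick sketch of that allowlist check, assuming the facility's address ranges are known (the range below is just an example); as noted, it is not real security and is easily defeated behind proxies or NAT:

```python
# Sketch of an IP allowlist check; the network range is an example value.
import ipaddress

FACILITY_NETWORKS = [ipaddress.ip_network("203.0.113.0/24")]

def is_facility_ip(remote_addr):
    addr = ipaddress.ip_address(remote_addr)
    return any(addr in net for net in FACILITY_NETWORKS)
```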
The second, more secure method is to do a verification at the start of the application with just a password and store a session, so that you know that people from your facility are accessing the site.
We're trying to evaluate a solution to implement "true" SSO for multiple (already existing) web solutions. True SSO here means logging in on any service and being authenticated on all of them, without further action from the user.
All of the applications we're going to use support OpenID and/or have plugins that allow OpenID, so this seems like something worth looking into. However, as I understand OpenID, the users would still be required to enter their OpenID credentials in each service.
Is there a sane way to implement SSO with automatic login once the OpenID provider has authenticated the user?
In an earlier project, we hacked up the PHP session data in the login procedures of two applications (both running on the same domain and server) so a login in the first application creates the session data for the other application as well. However, this is a very hacky solution and is prone to break when either application is updated, so we're trying to avoid it this time.
Are there any other SSO solutions that we could look into?
I am assuming that you have control over the SSO implementation.
There are some things you can do to make sure that once the user has been recognized by the SSO application, they will be logged in to your other applications virtually automatically:
In your SSO application, create a whitelist of service providers. Authentication requests from those websites will be automatically approved, so the user won't be asked to approve the request manually.
In your application, set the return_to parameter to the page the user intends to open, not simply to that application's homepage.
By the way, the most standard OpenID implementation accepts any URL. However, if you want to use SSO in a controlled environment, you can give the service provider a whitelist of trusted identity providers; after all, it's the service provider that initiates OpenID authentication.
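For the provider-side whitelist, an illustrative check might look like this (the hostnames and helper name are hypothetical): requests whose return_to points at a trusted relying party are approved without prompting the user.

```python
# Hypothetical provider-side check: auto-approve requests from whitelisted
# relying parties instead of prompting the user.
from urllib.parse import urlparse

TRUSTED_RETURN_HOSTS = {"wiki.example.com", "tickets.example.com"}

def auto_approve(return_to):
    host = urlparse(return_to).hostname or ""
    return host in TRUSTED_RETURN_HOSTS
```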
Yes, there is a means to do this. Run a Node-based application server and use cross-domain techniques to offer cookie-based credentials (backed up by site handshakes as each new user arrives, to scale better and minimize per-session resource expenditure).
I am working on such a beast right now, and I'm 5/6th done.
I have taken care of several annoying variables up front, including a means to ensure unique user logon, and I've taken a stand on other issues; one just can't get everything done in one system. However, one can have true SSO if one is willing to pull out some stops. It is YOUR stops which will define your solution. If you have not accurately described your limitations, then there isn't a solution which can be offered for implementation here; the nature of your problem is ENTIRELY implementation, not theory. In theory you have 4-5 different options. In practice you will find your answers.
I'm just wondering how cross-site authentication is handled for completely external companies?
e.g. My site acts as a "portal" onto another completely external site.
Is there a standard way of doing this so the user is not prompted to log in again?
I know with e.g. eBay-> PayPal you have to re-authenticate, but is this the only/most sensible way?
It's going to depend on what that other site uses as an authentication method.
Look at SAML (which is in essence a way of saying to the other site that they can trust your assertion that this user is who you say they are). OpenID is another system doing much the same thing.
In general, this is called federated identity management.
In my opinion the best way to do this is to create a third application which is responsible for authentication and permissions. I've written a blog entry about one such application I've created for my own pet projects.
http://www.netortech.com/Blog/Entry/12/Web-passport-services