I have a portal application that loads external content (widgets) via an iframe. Users log in to CAS via the portal itself. There are a few portal APIs, though, that need to be called from that external content. What information do I have to pass from the portal to the widgets so that the widgets can use it to make these calls without being rejected by CAS?
UPDATE
The more I investigate, the more I think that my question boils down to how CAS actually does what it's supposed to do. In other words, how can I go from one site where I've authenticated to another site and tell it that I've already done the authentication? What's the mechanism behind that, and how can I employ it in a web context?
The portal scenario you describe is exactly what CAS' proxy ticketing was designed for. We use it with an iframe-based web portal system and it works fine.
The CAS proxy ticketing mechanism allows a client (your portal) to dish out service tickets to other clients (the widgets loaded in your portal's iframes). This saves your users a trip through the CAS server for each widget that their browser loads. Proxying is also useful if you're trying to use CAS for web service authentication (i.e. when one web service needs to connect to another CAS-protected web service).
Note, though, that for your purpose proxy ticketing isn't strictly necessary. Your portal-iframe setup should work without it, but then each widget will have to make its own trip through the CAS server as it loads; at the very least this will slow down load times.
A while back I wrote a guide for setting up CAS proxy ticketing for RubyCAS-Client. The instructions are specific to the Ruby client, but they should give you a good overview of how CAS proxying works. Admittedly the implementation is a bit complicated -- mostly due to the "Proxy Granting Ticket" negotiation process:
http://rubycas-client.rubyforge.org/files/README_txt.html
(scroll down to the "How to act as a CAS proxy" section, about two-thirds of the way down)
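The guide above is Ruby-specific, but the flow itself is language-agnostic. To make it a bit more concrete, here is a rough Python sketch (using the requests library) of what the portal side and the widget side each do. The /proxy and /proxyValidate endpoints and their parameters are standard CAS 2.0; the URLs, the PGT value, and the widget API are placeholders I made up.

```python
# Illustrative sketch of the CAS 2.0 proxy flow, not a drop-in implementation.
# Assumes the portal already obtained a proxy-granting ticket (PGT) while
# validating its own service ticket.
import requests
import xml.etree.ElementTree as ET

CAS_BASE = "https://cas.example.com/cas"       # assumption
WIDGET_API = "https://widget.example.com/api"  # assumption
CAS_NS = {"cas": "http://www.yale.edu/tp/cas"}

def request_proxy_ticket(pgt: str, target_service: str) -> str:
    """Portal side: exchange the PGT for a proxy ticket for one widget."""
    resp = requests.get(f"{CAS_BASE}/proxy",
                        params={"pgt": pgt, "targetService": target_service})
    ticket = ET.fromstring(resp.text).find(".//cas:proxyTicket", CAS_NS)
    if ticket is None:
        raise RuntimeError("CAS refused to issue a proxy ticket")
    return ticket.text

def validate_proxy_ticket(ticket: str, service: str) -> str:
    """Widget side: validate the ticket it received and learn the user."""
    resp = requests.get(f"{CAS_BASE}/proxyValidate",
                        params={"ticket": ticket, "service": service})
    user = ET.fromstring(resp.text).find(".//cas:user", CAS_NS)
    if user is None:
        raise RuntimeError("Proxy ticket validation failed")
    return user.text

# Portal: fetch a ticket and hand it to the widget (e.g. as a query parameter
# on the iframe URL). Widget: validate it before answering the API call.
pt = request_proxy_ticket("PGT-example", WIDGET_API)
print(validate_proxy_ticket(pt, WIDGET_API))
```

In practice the CAS client library does most of this for you; the sketch just shows where the proxy ticket travels.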
It looks like I may be asking CAS to do more than it's capable of doing. I've been thinking of it as an SSO engine where a given session can be passed around so that authentication only happens once. Instead, it seems that CAS is primarily geared toward being a centralized auth service (yes, I see the irony that this is what the acronym actually stands for). By handing authentication requests off to a central server, a single cookie can be read by that server. Stateless connections like APIs, then, cannot be validated this way.
It looks like CAS' proxy tickets may offer some hope, but I'm not ready to venture down that path just yet.
There is a lot of discussion about microservice architecture. What I am missing, or maybe have not yet understood, is how to solve the issue of security and user authentication.
For example: I am developing a microservice which provides a REST service interface to a workflow engine. The engine is based on JEE and runs on application servers like GlassFish or WildFly.
One of the core concepts of the workflow engine is that each call is user-centric. This means that, depending on the role and access level of the current user, the workflow engine produces individual results (e.g. a user-centric task list, or the processing of an open task that depends on the user's role in the process).
In my eyes, such a service is therefore not accessible from just anywhere. For example, if someone plans to implement a modern Ajax-based JavaScript application that should use the workflow microservice, there are two problems:
1) To avoid the cross-origin (same-origin policy) problem with JavaScript/Ajax, the JavaScript web application needs to be deployed under the same domain the microservice runs on.
2) If the microservice forces user authentication (which is the case in my scenario), the application needs to provide a transparent authentication mechanism.
The situation becomes even more complex if the client needs to access more than one user-centric microservice that forces user authentication.
I always end up with an architecture where all services and the client application run on the same application server under the same domain.
How can these problems be solved? What is the best practice for such an architecture?
Short answer: check out OAuth, and manage caches of credentials in each microservice that needs to access other microservices. By "manage" I mean: be careful with security. In particular, mind who can access those credentials, and let the network topology be your friend. Create a DMZ layer and other internal layers reflecting the dependency graph of your microservices.
Long answer: keep reading. Your question is a good one because there is no simple silver bullet for what you need, although the problem is quite common.
As with everything related to microservices that I have seen so far, nothing here is really new. Whenever you need a distributed system doing things on behalf of a certain user, you need distributed credentials to enable such a solution. This has been true since mainframe times. There is no way around it.
Automated SSH login is, in a sense, such a thing. Perhaps it sounds like a glorified way to describe something simple, but in the end it enables processes on one machine to use services on another machine.
In the Grid world, the Globus Toolkit, for instance, bases its distributed security on the following:
X.509 certificates;
MyProxy - manages a repository of credentials and helps you define a chain of certificate authorities up to finding the root one, which should be trusted by default;
An extension of OpenSSH, which is the de facto standard SSH implementation for Linux distributions.
OAuth is perhaps what you need. It is a way to provide authorization with extra restrictions. For instance, imagine that a certain user has read and write permission on a certain service. When you issue an OAuth authorization, you do not necessarily give full user powers to the third party. You may give only read access.
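As a rough illustration of that "only give read access" idea, here is what requesting a scope-limited OAuth 2.0 token might look like. The token endpoint, client credentials, scope name, and service URLs are all placeholders I made up:

```python
# Hypothetical sketch: request an OAuth 2.0 token limited to a read scope,
# so the third party never receives the user's full write permissions.
import requests

resp = requests.post(
    "https://auth.example.com/oauth/token",   # assumption
    data={
        "grant_type": "client_credentials",
        "scope": "tasks:read",                # only read access is delegated
    },
    auth=("my-client-id", "my-client-secret"),
)
access_token = resp.json()["access_token"]

# The restricted token is then sent with each call to the downstream service.
tasks = requests.get(
    "https://workflow.example.com/api/tasks",  # assumption
    headers={"Authorization": f"Bearer {access_token}"},
)
```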
CORS, mentioned in another answer, is useful when the end client (typically a web browser) needs to call services across web origins. But it seems that your problem is closer to a cluster in which you have many microservices that are managed by you. Nevertheless, you can take advantage of solutions developed in the Grid field to ensure security in a cluster distributed across sites (for high-availability reasons, for instance).
Complete security is unattainable, so all of this is of no use if credentials are valid forever or if you do not take enough care to keep them secret wherever they end up. For that purpose, I would recommend partitioning your network using layers, each layer with a different degree of secrecy and exposure to the outside world.
If you do not want the burden of the infrastructure required for OAuth, you can either use HTTP Basic authentication or create your own tokens.
When using HTTP Basic authentication, the client sends its credentials with each request, which eliminates the need to keep session state on the server side for authorization purposes.
If you want to create your own mechanism, change your login request so that a token is returned in the response to a successful login. Subsequent requests carrying the same token then behave like HTTP Basic authentication, with the advantage that this happens at the application level (in contrast to the framework or app-server level with HTTP Basic authentication).
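A minimal sketch of both options, assuming a hypothetical /login endpoint, an X-Auth-Token header, and made-up URLs and credentials:

```python
# Sketch of the two approaches mentioned above (hypothetical endpoints).
import requests

BASE = "https://workflow.example.com"  # assumption

# 1) HTTP Basic authentication: credentials travel with every request,
#    so the server needs no session state for authorization.
tasks = requests.get(f"{BASE}/api/tasks", auth=("alice", "s3cret"))

# 2) Home-grown tokens: log in once, receive a token, replay it afterwards.
#    The /login endpoint and the X-Auth-Token header are assumptions.
login = requests.post(f"{BASE}/login",
                      json={"user": "alice", "password": "s3cret"})
token = login.json()["token"]
tasks = requests.get(f"{BASE}/api/tasks",
                     headers={"X-Auth-Token": token})
```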
Your question is about two independent issues.
Making your service accessible from another origin is easily solved by implementing CORS. For non-browser clients, cross-origin is not an issue at all.
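For illustration, here is roughly what enabling CORS on the service side could look like. Flask and the allowed origin are assumptions on my part, not something your stack requires:

```python
# Minimal sketch of adding CORS headers to an API so a browser client served
# from another origin can call it.
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_cors_headers(response):
    # Allow the JavaScript client at this (assumed) origin to call the API.
    response.headers["Access-Control-Allow-Origin"] = "https://app.example.com"
    response.headers["Access-Control-Allow-Headers"] = "Authorization"
    response.headers["Access-Control-Allow-Methods"] = "GET, POST, OPTIONS"
    return response

@app.route("/api/tasks")
def tasks():
    return {"tasks": []}

if __name__ == "__main__":
    app.run()
```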
The second problem, service authentication, is typically solved using token-based authentication.
Any caller of one of your microservices would get an access token from the authorization server or security token service (STS) for that specific service.
Your client authenticates with the authorization server or STS either through an established session (cookies) or by sending a valid token along with the request.
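On the service side, each microservice then just verifies the token before doing any user-centric work. A sketch, assuming the STS issues RS256-signed JWTs and you have its public key; the library choice (PyJWT), issuer, and audience values are my assumptions:

```python
# Hedged sketch: verify the bearer token issued by the authorization server /
# STS before serving a user-centric request.
import jwt  # pip install PyJWT

STS_PUBLIC_KEY = open("sts_public_key.pem").read()  # assumption

def authenticate(authorization_header: str) -> dict:
    """Return the verified claims, or raise if the token is missing/invalid."""
    scheme, _, token = authorization_header.partition(" ")
    if scheme.lower() != "bearer" or not token:
        raise PermissionError("Missing bearer token")
    return jwt.decode(
        token,
        STS_PUBLIC_KEY,
        algorithms=["RS256"],
        issuer="https://sts.example.com",  # assumption
        audience="workflow-service",       # assumption
    )

# claims = authenticate(request.headers["Authorization"])
# claims["sub"] then identifies the user for the user-centric task list.
```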
I have a web-based application which is used to find information about various assets in a facility. It provides only search capability; no CRUD operations are allowed from the application (other than read). The web application is always kept open on a touchscreen device (i.e. a workstation) and could be used by any of the facility staff. The users do not want to log in and out for each search operation.
We are planning to deploy the web application to the cloud. Although there is no need to authenticate the user who is accessing the web application, there is still a need to ensure that information about assets in the facility is not accessible to others. How do I build this authentication layer? The options I can think of are:
1. Include userid/password in the URL as parameters. I could create a userid/password for each facility. Simple, but the userid/password is always visible.
2. Certificate-based approach. Certificates are created for each of these workstations and deployed on them. Quite secure, but it has the challenge of managing the certificate life-cycle, as well as the challenge of configuring the web servers with certs from different facilities.
Any suggestions?
Thanks,
Prasanna
A simple, but not secure, option: do an IP check and, if the IP is from your facility, grant access (a small sketch follows below).
A second, more secure method is to do a verification at the start of the application with just a password and store a session, so that you know that people from your facility are accessing the site.
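A sketch of the IP-check idea, assuming a Flask app and a made-up facility address range (again, not secure on its own):

```python
# Reject requests that do not originate from the facility's network.
import ipaddress
from flask import Flask, request, abort

app = Flask(__name__)
FACILITY_NET = ipaddress.ip_network("203.0.113.0/24")  # assumption

@app.before_request
def check_facility_ip():
    if ipaddress.ip_address(request.remote_addr) not in FACILITY_NET:
        abort(403)

@app.route("/search")
def search():
    return {"results": []}
```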
We're trying to evaluate a solution to implement "true" SSO for multiple (already existing) web solutions. True SSO here means to login on any service, and be authenticated on all, without further actions from the user.
All of the applications we're going to use support OpenID and/or have plugins that allow OpenID, so this seems like something worth looking into. However, as I understand OpenID, the users would still be required to enter their OpenID credentials in each service.
Is there a sane way to implement SSO with automatic login once the OpenID provider has authenticated the user?
In an earlier project, we hacked up the PHP session data in the login procedures of two applications (both running on the same domain and server) so a login in the first application creates the session data for the other application as well. However, this is a very hacky solution and is prone to break when either application is updated, so we're trying to avoid it this time.
Are there any other SSO solutions that we could look into?
I am assuming that you have control over the SSO implementation.
There are some things you can do to make sure that, once the user has been recognized by the SSO application, he will be logged in to your other applications virtually automatically:
In your SSO application, create a whitelist of service providers. Authentication requests from those sites will be approved automatically, so the user won't be asked to approve each request manually (a small sketch follows below).
In your application, set the return_to parameter to the page the user intends to open, not simply the application's home page.
By the way, a standard OpenID implementation accepts any URL. However, if you want to use SSO in a controlled environment, you can give the service provider a whitelist of trusted identity providers; after all, it is the service provider that initiates OpenID authentication.
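As a sketch of the whitelist idea on the identity provider side; the hostnames, the helper names, and the surrounding framework are hypothetical, not part of any OpenID library:

```python
# Requests whose return_to matches a trusted service provider are approved
# without showing the user a confirmation screen.
from urllib.parse import urlparse

TRUSTED_SERVICE_PROVIDERS = {"app1.example.com", "app2.example.com"}  # assumption

def auto_approve(return_to_url: str) -> bool:
    """Skip the manual approval screen for whitelisted service providers."""
    host = urlparse(return_to_url).hostname
    return host in TRUSTED_SERVICE_PROVIDERS

# In the OpenID provider's approval step:
# if auto_approve(request.args["openid.return_to"]):
#     issue_positive_assertion()   # hypothetical helper
# else:
#     show_confirmation_page()     # hypothetical helper
```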
Yes, there is a means to do this. Run a Node-based application server and use cross-domain techniques to offer cookie-credentialed sessions (backed up by site handshakes as each new user arrives, to scale better and minimize per-session resource expenditure).
I am working on such a beast right now, and I'm 5/6th done.
I have taken care of several annoying variables up front (including the means to assure unique user logon) and I've taken a stand on other issues; one just can't get everything done in one system. However, one can have true SSO if one is willing to pull out some stops. It is YOUR stops which will define your solution. If you have not accurately portrayed your limitations, then there isn't a solution that can be offered for implementation here, and the nature of your problem is ENTIRELY implementation, not theory. In theory you have 4-5 different options. In practice you will find your answers.
I'm writing a web based application that will have its own authorization/authentication mechanism (traditional cookie/session based user/pass). However, depending on the organization that licenses the software, I want them to be able to plug in their own existing internal authentication system as a way to replace mine. Ideally, they'd have to run as little code as possible on their end; I'm trying to make this a mostly hosted service. I'm aware of the existence of OAuth, but don't entirely understand how I would go about implementing the system at a higher level. Any tips would be appreciated.
What platform are you developing for? PHP, Java, .NET, or other?
You should look into SAML and OpenID in addition to OAuth. These protocols are used for website-to-website authentication more often than OAuth, which is mainly used by client applications on desktop and mobile. OAuth can be used for this as well, but that is what people tend to use it for.
In general, you are considered to be the service provider. The other organizations are identity providers. In SAML, you would redirect a user to their identity provider, which authenticates (and possibly authorizes) the user. They are then redirected back to the service provider, which can log them in.
See the links in another post of mine for pointers to the protocol documentation. Google Apps also has a good diagram of single sign-on with SAML in action.
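To give a feel for the SP-initiated redirect described above, here is a bare-bones sketch of the SAML HTTP-Redirect binding. The IdP URL, entity ID, and ACS URL are assumptions, and a real deployment would use a library such as python3-saml rather than hand-rolled XML:

```python
# The service provider builds an AuthnRequest, deflates and base64-encodes it,
# and sends the browser to the identity provider's SSO endpoint.
import base64, datetime, uuid, zlib
from urllib.parse import urlencode

IDP_SSO_URL = "https://idp.example.org/sso"            # assumption
SP_ENTITY_ID = "https://service.example.com/metadata"  # assumption
ACS_URL = "https://service.example.com/saml/acs"       # assumption

def build_login_redirect_url() -> str:
    authn_request = f"""
    <samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
        xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
        ID="_{uuid.uuid4().hex}" Version="2.0"
        IssueInstant="{datetime.datetime.utcnow().isoformat()}Z"
        AssertionConsumerServiceURL="{ACS_URL}">
      <saml:Issuer>{SP_ENTITY_ID}</saml:Issuer>
    </samlp:AuthnRequest>"""
    # HTTP-Redirect binding: raw DEFLATE, then base64, then URL-encode.
    deflated = zlib.compress(authn_request.encode())[2:-4]
    encoded = base64.b64encode(deflated).decode()
    return f"{IDP_SSO_URL}?{urlencode({'SAMLRequest': encoded})}"

print(build_login_redirect_url())
```

The identity provider authenticates the user and posts a signed assertion back to the ACS URL, where the service provider logs the user in.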
Your question entangles the very things (authn/authz, i.e. policy, and application) that you want to disentangle. The answer you're looking for requires separating those concerns.
The "standard" answer is to separate authn/authz policy, typically using a PEP (policy enforcement point) to enforce decisions made by a PDP (policy decision point). SAML provides standards for communicating between the two.
You wind up with your application (and typically many others) guarded by the PEP. The PEP can be embedded inside the application (for example as a Tomcat interceptor), but it is better run in a separate container as a proxy. The only thing reachable from the outside is the PEP. It examines each request, ensures the user is authenticated, and (for SSO) ensures that each request contains a security token.
If not, the PEP forwards the request to the PDP for authentication (login screen). The PDP attaches a security token and forwards the request back to the PEP. Since the request now has a valid token, the PEP forwards it to the application behind the firewall.
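A toy sketch of the PEP-as-proxy idea. Flask, the cookie name, the PDP URLs, and the backend address are all assumptions; a real deployment would use an off-the-shelf gateway rather than hand-rolled forwarding:

```python
# Every request must carry a security token; otherwise the PEP bounces the
# browser to the PDP's login. Valid requests are proxied to the guarded app.
from flask import Flask, request, redirect
import requests

app = Flask(__name__)
PDP_LOGIN_URL = "https://pdp.example.com/login"  # assumption
APP_BACKEND = "http://app.internal:8080"         # assumption: guarded app

def pdp_says_token_is_valid(token: str) -> bool:
    # Hypothetical PDP validation endpoint.
    return requests.get("https://pdp.example.com/validate",
                        params={"token": token}).ok

@app.before_request
def enforce_policy():
    token = request.cookies.get("security_token")
    if not token or not pdp_says_token_is_valid(token):
        # No valid token: hand the user off to the PDP for authentication.
        return redirect(f"{PDP_LOGIN_URL}?return_to={request.url}")

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def forward(path):
    # Token was valid, so proxy the request through to the protected app.
    backend = requests.get(f"{APP_BACKEND}/{path}", cookies=request.cookies)
    return backend.content, backend.status_code
```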
We are developing an app that consists of a web server that hosts a web service (amongst other things) and a client that will be communicating with that web service. Both the client app and the server are expected to be used within a corporate firewall. This application will be packaged up and deployed to organizations across the world—so it needs to be flexible enough to work in multiple types of environments.
My question revolves around web service authentication and what is appropriate for real-world scenarios. I know some companies have proxy servers that require separate authentication. How often is this a requirement across organizations? When does the proxy server force the user to authenticate? Can you access internal sites without authenticating, or is the authentication only for external sites?
The reason I ask is that I'm not sure what kind of authentication capability we should build into our client application for the web service. By default, we take the current user's credentials and pass them up to the server. Do you think this is sufficient? In a case where a company requires some form of alternate authentication for internal access, this will not work. My question revolves around this last case: how often does it happen? Why would a company force alternate credentials for internal access?
Thanks!
Why not make it configurable? Further, use WCF and you have the ability to configure just about anything you might need, in most cases without changing your code.
If Internet Explorer can reach a site through the proxy server without prompting the user, your call to the web service should "just work". If the user is prompted by IE, you'll need to put together a way to fill in the proxy server authentication information.
I've run into quite a few problems getting web services rock solid, but never had a proxy server authentication issue.