CAS communicating with an external user store via REST - authentication

We are trying to implement CAS as an SSO solution in our org. All our user profiles are stored in a custom user store that exposes REST APIs for fetching this information. What I want to know is whether I can use CAS here: instead of querying a DB or an LDAP server, CAS would query this custom user store via REST. Does CAS provide such functionality?

You certainly can. CAS does not provide this functionality out of the box, but you can leverage the existing API to design and develop a custom authN handler that talks to the custom store through its REST API.
See the deployerConfigContext.xml file to review how authN handlers are defined and placed.
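As a very rough illustration, a handler along these lines is sketched below. It assumes the CAS 3.x-era API that deployerConfigContext.xml implies (class and package names differ in newer CAS versions), and the REST endpoint URL and JSON payload are entirely hypothetical.

```java
// Hypothetical sketch of a custom authN handler backed by a REST user store.
// Assumes the CAS 3.x-era API (org.jasig.cas.*); adapt to the API of your CAS version.
package com.example.cas;

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

import org.jasig.cas.authentication.handler.AuthenticationException;
import org.jasig.cas.authentication.handler.support.AbstractUsernamePasswordAuthenticationHandler;
import org.jasig.cas.authentication.principal.UsernamePasswordCredentials;

public class RestUserStoreAuthenticationHandler extends AbstractUsernamePasswordAuthenticationHandler {

    // Hypothetical endpoint of the custom user store; inject it via Spring in deployerConfigContext.xml.
    private String authUrl = "https://userstore.example.org/api/authenticate";

    @Override
    protected boolean authenticateUsernamePasswordInternal(final UsernamePasswordCredentials credentials)
            throws AuthenticationException {
        try {
            final HttpURLConnection conn = (HttpURLConnection) new URL(authUrl).openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/json");

            // Naive JSON building purely for illustration; use a JSON library in real code.
            final String body = String.format("{\"username\":\"%s\",\"password\":\"%s\"}",
                    credentials.getUsername(), credentials.getPassword());
            try (OutputStream os = conn.getOutputStream()) {
                os.write(body.getBytes(StandardCharsets.UTF_8));
            }

            // Treat HTTP 200 from the user store as "credentials valid"; anything else fails authN.
            return conn.getResponseCode() == 200;
        } catch (final Exception e) {
            // Connectivity or parsing problems simply fail this handler.
            return false;
        }
    }

    public void setAuthUrl(final String authUrl) {
        this.authUrl = authUrl;
    }
}
```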
Also, if you do design the custom handler, please try to contribute it back to the community/codebase through a pull request on GitHub, and discuss the details on either #cas-dev or #cas-user. There may well be a possibility of merging the changes into the codebase to accommodate similar requests in the future.

Related

Security implications of using Keycloak as a REST API and avoiding Keycloak forms altogether?

I'm currently working on a project where we are using OpenID Connect and OAuth2 with Keycloak's default forms.
We have a requirement to implement 2FA. In an ideal world we'd scrap the Keycloak forms altogether and just use Keycloak as a headless API, building the login forms in the main application itself.
The reasons being:
We have components built in Vue.js we would like to re-use (e.g. password/code inputs, password strength indicator etc)
We don't want to maintain the same styles in two different projects
We don't want to maintain or be limited by custom templates
We don't want to write custom behaviour in vanilla JS
After doing research I've found that using Keycloak as an API is not recommended, because the redirection between the client and the third-party login acts as an additional layer of security and is part of the OAuth 2.0 model. We're storing users' medical information, so security is a concern.
What would you guys suggest?
You are right that using an OAuth server through an API is not recommended. Redirects are an important part of the security of an OAuth flow. This of course creates all the drawbacks that you mentioned - having to maintain multiple codebases with the same functionality.
A solution to this problem is to use a hypermedia API with strong security mechanisms, which can be used to perform OAuth flows. Unfortunately this is not a standard yet; it is an emerging feature. You can read how such an API works here, and here you can find an in-depth description of the security features of an implementation we did at Curity.
It will definitely not be an easy task to implement this in Keycloak at the moment, but most probably there is no other option to solve this problem, since you said you need 2FA; without 2FA, an option would be to use the Resource Owner Password Flow.

Does it make sense to use OAuth for a native desktop app that owns the resources it uses?

We have a native Windows desktop app that uses resources that we control on behalf of our customers. In the vein of not rolling our own security infrastructure, I am wondering if it makes sense to use an OAuth library/framework like IdentityServer (our frontend and backend stacks are .NET based, with ASP.NET Core on the backend).
But from what I have read, OAuth is all about giving an application access to resources that the user owns, which are managed and controlled by another party, without exposing the user's security credentials to the application.
Given the application is, from our point of view, "trusted", it seems more straightforward for the application to capture the password directly from the user and obtain an access token (e.g. a bearer token) directly from the back end rather than redirecting the user to the web browser.
Management of authorization levels for various resources is something we need to take care of robustly, as we will have multiple applications and users which will need configurable access levels to different types of resources, so I don't really want to be rolling our own solution for this.
We also want the ability for users to remain logged in for indefinite periods of time, but to be able to revoke their access via a configuration change on the back end.
Should we be using a different type of framework to help ensure our implementation is sound from a security point of view? If so, any suggestions of suitable technology options would be most helpful.
Alternatively, is there an OAuth flow that makes sense in this case?
It sounds like the "Resource Owner Password Credentials Grant" might help with your problem.
In the short term, the use of OAuth may not seem very different from the usual "username/password + RBAC" model, but the benefits may come in terms of scalability later on, for example when single sign-on needs to be implemented, or when you need to provide service interfaces to third parties.
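To make the flow concrete, here is a minimal sketch of what a Resource Owner Password Credentials token request looks like on the wire (RFC 6749, section 4.3). The endpoint URL, client id, secret, scopes and user credentials are all placeholders; IdentityServer typically exposes its token endpoint at /connect/token, but check your own configuration.

```java
// Minimal sketch of a Resource Owner Password Credentials grant request (RFC 6749, 4.3).
// All values below are placeholders for illustration.
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class RopcTokenRequest {

    public static void main(String[] args) throws Exception {
        String form = "grant_type=password"
                + "&client_id=desktop-app"              // placeholder client id
                + "&client_secret=s3cret"               // placeholder secret
                + "&scope=" + URLEncoder.encode("api offline_access", StandardCharsets.UTF_8)
                + "&username=" + URLEncoder.encode("alice", StandardCharsets.UTF_8)
                + "&password=" + URLEncoder.encode("p4ssw0rd", StandardCharsets.UTF_8);

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://auth.example.org/connect/token"))  // placeholder URL
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();

        // The JSON response carries access_token, expires_in and, if the client is allowed
        // refresh tokens (e.g. via an offline_access-style scope), a refresh_token. Keeping
        // the refresh token valid until you revoke it server-side is what lets the desktop
        // app stay "logged in" indefinitely while remaining revocable from the back end.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```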

How can I send a request from a Cumulocity application to a microservice without authorization

Within Cumulocity (hosted) we have our own application with plugins written using AngularJS.
From this application we want to send a request to a microservice that we have running as well.
However, the microservice asks for authorization information when we send a GET request. (How) Can we overcome this?
The reason we have decided to do it like this is so that we do not have to expose critical information.
Thanks
All microservice invocations require authentication with a valid user in the tenant.
If you really want to expose something without authentication, you can create a dummy user with no other permissions in the tenant and hardcode the credentials of that user in your AngularJS code. However, this is a risk for you, as it makes it easy for malicious users to bombard your tenant with potentially charged API requests (depending on your service provider pricing model).
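For illustration, this is roughly what such a hardcoded-credentials call looks like on the wire (sketched in Java; in the AngularJS app you would attach the same Authorization header to the $http request). The tenant, user, password and service path below are placeholders.

```java
// Sketch of a request to a Cumulocity microservice using a hardcoded "dummy user".
// Tenant, credentials and the /service/... path are placeholders for illustration.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class DummyUserCall {

    public static void main(String[] args) throws Exception {
        // Cumulocity Basic auth credentials are usually given as "tenant/user:password".
        String credentials = "myTenant/dummyUser:dummyPassword";
        String authHeader = "Basic " + Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://myTenant.cumulocity.com/service/my-microservice/status"))
                .header("Authorization", authHeader)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```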
If the information that you want to expose is not dynamic (maybe tenant configuration or so), you could upload such information as part of a web application. E.g., you upload a "config" application with a single file "config.json" and load that from your AngularJS application using the URL /apps/config/config.json. Not sure if that is your case.
All requests to Cumulocity including those to microservices must be authenticated fully. There is no way to access a microservice without valid credentials.
The platform needs this information to determine whether the user and tenant have sufficient access rights to perform the requested action. Even if your microservice does not require special permissions to access it, Cumulocity will at least need to check whether the originating tenant is allowed to use the microservice.

Implementing OAuth 2 in a multi-tenant application using dynamic scopes

I'm currently trying to migrate a multi-tenant system from a "custom" authentication and authorization implementation to OAuth2.
The multi-tenancy model is very similar to GitHub's structure, so I'm going to use it as the main example. Let's assume that in the application we have users, repositories and organizations. Users have access to repositories directly, or through organizations they are members of. Depending on their access rights, users should have different permissions on repositories and their sub-resources (like /repository/issues), or, for users who manage them, on organizations and their sub-resources (/organization/members). Unlike GitHub's OAuth2 solution, this system should be able to provide different levels of permissions across repositories or organizations (GitHub does this at a different level with a custom implementation).
The goal is to keep the logic as simple as possible, encapsulate everything in an authorization service and piggyback on OAuth2 as much as possible.
My approach was to deploy a generic OAuth2 service, and handle the permissions using dynamic scopes:
user:read
user:write
repo:read
org:read
repo:<repo_id>:issues:read
repo:<repo_id>:issues:write
org:<org_id>:members:read
org:<org_id>:members:write
This enables granular permissions for clients and users, such as a user being able to read + write issues in one of his repos, but only read in another.
While this seems to solve the problem, the main limitation is being able to request scopes. Since users would not know the IDs of the repos and orgs they have access to, they are not able to request a correct list of scopes when contacting the authorization server.
In order to overcome this I considered 2 solutions:
Solution 1
Issue a token for repo:read and org:read
Retrieve list of repos and orgs the user has access to
Issue a second token with all necessary scopes
On deeper thought, this turns out not to be viable, since it would not support grants like implicit or authorization_code unless the authorization server dealt with this "discovery" of resources.
Solution 2
The first 2 steps are common to the first solution, while for the third step, users would only be able to issue tenant-scoped tokens. By extending OAuth2 with a parameter identifying the tenant (/authorize?...&repo=<repo_id>), clients using the authorization_code grant would have to issue tokens for every tenant. The token issued in step 1 would have to persist the identity of the user on the authorization server and eliminate the need for re-authentication when a user switches between tenants. The downside of this approach is that it increases the complexity of client integrations and that it might defy the standard in some way.
I'm looking for a second opinion on this, which would possibly simplify the problem and make sure the solution adheres to the standard.
tl;dr: What about using self-contained access tokens which convey user identity information, with the access policy defined at the API endpoint?
The problem you face right now is due to a mismatch with what OAuth 2.0 scope is capable of. The scope value in OAuth 2.0 is defined to be used by the client application:
"The authorization and token endpoints allow the client to specify the scope of the access request using the "scope" request parameter." (RFC 6749, section 3.3)
But in your approach, you try to make it something defined by the end user (the human user).
A solution would be to make the authorization server independent of permission details. That means the authorization server only issues tokens which are valid for your service/system. These tokens can be self-contained, holding a user identifier and optionally organisation details (claims). They may contain other details required by your service (up to you to decide). The ideal format is a JWT.
Once your client (the consumer of the system, like the GitHub website) obtains this token, it can call the system backend. Once your system backend receives the token, it can validate it for integrity and required claims, and use those claims to identify which resources are granted to this specific user. The permission levels you previously defined as scopes are now stored with your service backend.
The advantage of this is that user identity can reside anywhere. For example, you can use Google or Azure AD, and as long as they can provide you with a valid token, you can let such users use your system. This works because permissions are not stored in the tokens themselves, and super users will have the ability to define and maintain those permissions.
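To make that concrete, below is a rough sketch of the backend side, using the jjwt library and an HMAC shared secret purely for illustration (a real deployment would more likely verify an RS256 signature against the issuer's published keys). The claim names and the permission map are assumptions, not part of any standard.

```java
// Sketch of the "self-contained token" idea: validate the JWT, read identity claims,
// then authorize against permissions stored by the service backend itself.
// Uses jjwt 0.11.x; key handling and permission data are illustrative assumptions.
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.Set;

import javax.crypto.SecretKey;

import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.security.Keys;

public class TokenBackedAuthorizer {

    // Permissions live with the service backend, keyed by user id, not inside the token.
    private final Map<String, Set<String>> permissionsByUser = Map.of(
            "alice", Set.of("repo:42:issues:read", "repo:42:issues:write"),
            "bob",   Set.of("repo:42:issues:read"));

    private final SecretKey key = Keys.hmacShaKeyFor(
            "change-me-to-a-256-bit-shared-secret!!".getBytes(StandardCharsets.UTF_8));

    /** Returns true if the bearer of this token may perform the given action. */
    public boolean isAllowed(String jwt, String requiredPermission) {
        // 1. Validate integrity and expiry; this throws if the signature or exp claim is bad.
        Claims claims = Jwts.parserBuilder()
                .setSigningKey(key)
                .build()
                .parseClaimsJws(jwt)
                .getBody();

        // 2. Identify the user from the claims; an "org" claim could further scope the lookup.
        String userId = claims.getSubject();

        // 3. Authorize against the backend's own permission store, not the token's scopes.
        return permissionsByUser.getOrDefault(userId, Set.of()).contains(requiredPermission);
    }
}
```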
Agree with everything mentioned by @Kavindu Dodanduwa, but would like to add some additional details here.
This problem indeed lies beyond what standard OAuth 2.0 covers. If you want to manage permissions per resource (e.g. per repo or organization), this should be handled in your service or in a gateway in front of it. Typically you need some kind of access-control list (ACL) stored on your backend which you can then use to authorize users.
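As a tiny illustration of that ACL idea (the users, resource ids and permissions below are made up, and in practice the table would live in a database or policy store):

```java
// Minimal in-memory sketch of a per-resource ACL check done on the backend or in a gateway.
import java.util.Map;
import java.util.Set;

public class RepoAcl {

    enum Permission { READ, WRITE }

    // userId -> (repoId -> granted permissions); illustrative data only.
    private final Map<String, Map<String, Set<Permission>>> acl = Map.of(
            "alice", Map.of("repo-1", Set.of(Permission.READ, Permission.WRITE),
                            "repo-2", Set.of(Permission.READ)),
            "bob",   Map.of("repo-1", Set.of(Permission.READ)));

    boolean canAccess(String userId, String repoId, Permission needed) {
        return acl.getOrDefault(userId, Map.of())
                  .getOrDefault(repoId, Set.of())
                  .contains(needed);
    }

    public static void main(String[] args) {
        RepoAcl repoAcl = new RepoAcl();
        System.out.println(repoAcl.canAccess("alice", "repo-1", Permission.WRITE)); // true
        System.out.println(repoAcl.canAccess("alice", "repo-2", Permission.WRITE)); // false
        System.out.println(repoAcl.canAccess("bob",   "repo-2", Permission.READ));  // false
    }
}
```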
If you'd like to look at existing standards, check out XACML and UMA (which is an extension of OAuth 2.0). However, I find them rather complicated to implement and manage, especially in a distributed environment.
Instead I'd suggest an alternative solution using a sidecar for performing authorization on your service. Check out the related blog posts:
Building a fine-grained permission system in a distributed environment: Architecture
Building a Fine-Grained Permissions System in a Distributed Environment: Implementation
Open Policy Agent could be a good solution for such architecture.

Need help in implementing WCF authentication and authorization in an internet context

I need to create a WCF service which would be consumed by Silverlight apps downloaded over the internet. Basically the users are not part of any Windows domain, but their credentials and roles are maintained in a database.
1. The WCF service should be internet-enabled, but methods cannot be accessed anonymously.
2. Authorization should also be supported.
3. The user and roles tables are not as per the ASP.NET membership schema.
4. Developers should not be constrained to have IIS installed and certificates configured.
5. Authorized users should be able to access only their related information: they should be able to delete or update their related companies, but not others'.
To achieve points 1 & 2, I followed the link below:
WCF security by Robbin Cremers
To cover point 3, I have provided custom implementations of the MembershipProvider and RoleProvider classes and overridden the ValidateUser and IsUserInRole methods respectively, so that they fetch from my own user and roles tables.
So far, so good: authentication and authorization work fine.
Now the problem: the developers can't have IIS installed and certificates configured, so I need to disable this in development mode. Hence I have implemented a custom CodeAccessSecurityAttribute class which checks whether we are in development or production mode and uses a custom IPermission or a PrincipalPermission accordingly.
[Question 1] I don't see this approach recommended anywhere, so I'm a little afraid: is this the right approach, or is there a better way to handle this situation?
[Question 2] Lastly, related to point 5, do I need to send some kind of token over? What is the best approach for this?
[Question 3] What about the performance impact of Robbin Cremers' method? For every service call, two extra database calls are made in "ValidateUser" and "IsUserInRole" to authenticate and authorize. Is there a better way?
Sorry for the big question.
As far as I can tell from your scenario description, once you've created and hooked up your custom Membership / Role providers, you don't really need Message Security. Instead, the standard approach described in http://msdn.microsoft.com/en-us/library/dd560702(v=vs.95).aspx will work just fine (or http://msdn.microsoft.com/en-us/library/dd560704(v=vs.95).aspx if you want users to log in via your Silverlight app instead of via a regular web page). The browser then handles the sending of any required tokens automatically (via cookies), and I'm guessing ASP.NET is smart about caching authentication/authorization results so the overhead should not be incurred on every call.