I have a project that requires ABAC for access control over my project's resources. I've been looking at OPA and AuthZForce as options to implement ABAC, and OPA looks like it might be less complicated than AuthZForce. I see that OPA compares itself to other systems and paradigms, but the example it gives for ABAC leaves a lot to be desired, mainly because ABAC requires points that enforce policies, make policy decisions, and fetch subject and object attributes for those decisions. I feel like OPA has everything but the last part covered, but it's hard to tell whether that's true since their ABAC example is just a one-off.
I've been looking all over the internet for examples of OPA being used as an implementation for ABAC but I haven't found anything.
My project is a web app that allows end users to create resources and create policies for their resources. I plan to create a UI for the end users to create their policies. My plan is to abstract away the coding aspect of it and instead give them dropdowns and buttons; behind the scenes this UI will use a custom syntax that I will translate into an OPA policy.
The main issue I'm having is how to implement this as ABAC. Is it as straightforward as building the part that fetches the attributes for the subject, object, and environment and creating the glue between it and OPA (essentially creating a PIP), since OPA itself appears to be a de facto PEP and PDP?
I feel like I'm drowning in the documentation, and there seems to be quite a bit missing from OPA's own docs to explain how this can be done.
OPA looks like it might be less complicated than AuthZForce
There are a couple of pros and cons to either approach. First of all, as you realized, both OPA and AuthZForce are ABAC implementations (you can read more on ABAC here and here).
OPA
Open Policy Agent is a relatively novel project aimed mainly (but not only) at tackling fine-grained authorization for infrastructure (e.g. Kubernetes). They even have pre-built integration points for Istio and Kubernetes. OPA provides a PEP (enforcement / integration) and a PDP (policy decision point), though it does not necessarily use those terms. The language it uses is called Rego (a derivative of Datalog).
OPA itself appears to be a de facto PEP and PDP
Yes, you are absolutely right, and that puts the burden on you to implement an alternative for PIPs.
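To make that concrete, here is a minimal sketch of that glue layer, assuming OPA runs as a local sidecar on port 8181 and that a policy package named `myapp.authz` exposes an `allow` rule; the attribute-fetching functions are hypothetical stand-ins for your own subject/object/environment stores:

```python
# Minimal PIP-style glue sketch: fetch attributes yourself, then ask OPA for
# a decision via its REST Data API. Assumes OPA on localhost:8181 and a
# policy rule at data.myapp.authz.allow (both are assumptions).
import requests

OPA_URL = "http://localhost:8181/v1/data/myapp/authz/allow"

def fetch_subject_attributes(user_id):
    # e.g. look up department, clearance, roles in your user store
    return {"id": user_id, "department": "engineering"}

def fetch_resource_attributes(resource_id):
    # e.g. look up owner, classification in your resource store
    return {"id": resource_id, "owner": "alice"}

def is_allowed(user_id, action, resource_id):
    input_doc = {
        "subject": fetch_subject_attributes(user_id),
        "action": action,
        "resource": fetch_resource_attributes(resource_id),
        "environment": {"channel": "web"},
    }
    # OPA evaluates the rule against the supplied input document
    resp = requests.post(OPA_URL, json={"input": input_doc}, timeout=2)
    resp.raise_for_status()
    return resp.json().get("result", False)

if __name__ == "__main__":
    print(is_allowed("alice", "read", "project-42"))
```

The point is that the PIP work lives entirely in your service: you gather the attributes, and OPA only evaluates the policy against the input you hand it.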
I feel like I'm drowning in the documentation, and there seems to be quite a bit missing from OPA's own docs to explain how this can be done.
Reach out to Styra - they sell services around OPA. Alternatively reconsider your choice and look into XACML (see below).
Drawbacks
the language (Rego) is not easy to understand
the language is not standardized
OPA does not support Policy Information Points (PIP) - that's by design.
Implementations
I've been looking all over the internet for examples of OPA being used as an implementation for ABAC but I haven't found anything.
Have a look at the work they did at Netflix. That's the main implementation I am aware of. You can also reach out to Styra, the company behind OPA, and they'll be able to help out.
AuthZForce
AuthZForce is an open-source Java implementation of the XACML (eXtensible Access Control Markup Language) standard. It provides a full ABAC implementation (PAP, PEP, PDP, PIP). It's part of FIWARE (an open-source initiative) and it's actively developed by a team at Thales.
AuthZForce Drawbacks
it does not seem to have a graphical interface to author policies. I found a reference to the KEYROCK PAP but couldn't see any screenshots
it does not support ALFA, the Abbreviated Language for Authorization.
Implementations
There are many other implementations of XACML you can consider (both open-source and commercial):
AT&T XACML
SunXACML
WSO2 - part of their WSO2 Identity Server platform - it's called Balana
Axiomatics (commercial - this is where I work) - we have a large customer base using our platform ranging from Fortune 50 companies to agile startups.
Benefits of XACML & ALFA
One of the key benefits of XACML / ALFA is that they are standards and widely adopted. The standard has been around since 2001 and interoperates with other standards e.g. SAML, OAuth, and SCIM.
Perhaps the most concrete answer is a detailed description of how Chef Automate uses OPA to implement application authorization.
More generally, we are planning a guide describing how to use OPA for application authorization; it requires more detail than an SO answer can provide. Using OPA (or any policy engine) for application authorization depends a bit on your application, its architecture, your SLAs, etc. Here are a few key issues to consider:
Policy: how much expressiveness do your end-user policies need? Do they just define, say, user attributes or user roles, or do they map user attributes/roles to permissions too? OPA lets you solicit those end-user policies as JSON objects and then write policy rules that make decisions using those JSON objects. For efficiency, you can compile those JSON objects into bona fide OPA rules.
Enforcement: where do you need to enforce authorization policies (e.g. gateway, microservice, database)? Your requirements around latency, size of data, and expressiveness of database query languages will all impact this decision. OPA is flexible enough to help with all of these and has a couple of specific integrations that will help: Envoy and similar service-mesh systems for microservices, and SQL/Elasticsearch for databases.
Data: how much attribute data is there, how frequently does it change, what consistency guarantees do you need, and what mechanisms do you have for getting the data into OPA (e.g. caches, event streams)? Here's a guide for injecting data into OPA; it uses LDAP/AD as an example data source, but the principles are the same for any data source.
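As one illustration of the data point above, here is a hedged sketch of replicating attribute data into OPA through its REST Data API, so policies can read it from `data` instead of receiving everything in `input`; the document path and the attribute shape are assumptions:

```python
# Sketch of pushing attribute data into OPA so policies can reference it as
# data.user_attributes.<id>. Assumes OPA listens on localhost:8181; the
# document path (user_attributes) is arbitrary.
import requests

OPA_DATA_URL = "http://localhost:8181/v1/data/user_attributes"

def replicate_user_attributes(users):
    # `users` is a dict keyed by user id, e.g. loaded from LDAP/AD or SQL.
    # PUT on the Data API replaces the document at this path wholesale.
    resp = requests.put(OPA_DATA_URL, json=users, timeout=5)
    resp.raise_for_status()

replicate_user_attributes({
    "alice": {"department": "engineering", "roles": ["admin"]},
    "bob": {"department": "sales", "roles": ["viewer"]},
})
```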
We are always happy to talk through the details of your application and help you find the right fit for OPA. Feel free to reach out on the OPA slack channel.
Related
My team and I are considering using an authentication SaaS.
I am quite sure that the SaaS solution will eventually be more secure than a hand-made one (even using proper libraries), and in our case we can afford the money.
But the thing that makes me hesitate the most is how this service will communicate with the rest of my app. Will it actually simplify our code or make it an even more complicated tangle in the end?
I understand that the user list with credentials, and possibly some attributes, are stored there.
But then, what should I store in my app's (SQL) DB?
Say I have users that belong to companies and other assets in a 1-n relationship.
I like to write things like:
if current_user.company.assets.include?(current_user.assets)
  # logic here
end
Should I:
only store userIds in these tables?
=> But then, I can't use relationships between user attributes and the rest of the DB attributes
store some kind of cached data in a so-called sessions table so I can use it as a disposable table of active users?
=> It feels less secure and implies duplicated content which kind of sucks.
make the user object virtual, loaded with the auth SaaS data, and use it there?
=> Slightly better, but I can't make queries over users; not even company.users is available
other proposition?
I'm highly confused on the profits of externalizing what's usually the core object of an app. Am I thinking too monolithically? :-D
Can anyone make suggestions? Even better would be feedback from devs who implemented it.
I found articles on the web; they talk about security and ease of implementation but don't tackle this question properly.
Cheers, I hope the question doesn't get closed as I'm very curious about the answer.
I'm highly confused on the profits of externalizing what's usually the core object of an app. Am I thinking too monolithically? :-D
Yes, you are thinking too monolithically by assuming that you have to write and control all the code. You have asked a question about essentially outsourcing Authentication to an existing SaaS-based solution, when you could just as easily write your own. This is a common mistaken assumption that many developers make, especially in the area of Security for applications.
Authentication is a core requirement for many solutions, but it is very rarely a core aspect or feature of the solution.
By writing your own solution to what is a generally standard concept (Authentication), you have to write, test and maintain your logic, including keeping up to date with the latest security trends over the lifetime of the product. In terms of direct Profit/Cost:
Costs you a lot of time and effort to get it right
Your own solution will add a layer of technical debt; future developers (internal or external) will need to familiarise themselves with your implementation before they can even start maintenance or improvement work
You are directly assuming all the risks and responsibilities to maintain the security of the solution and its data.
Depending on the type of data and the jurisdiction of your application, you may be asked down the track to implement multi-factor authentication or to force all users to re-register to adopt stronger security protocols; this can be a lot of effort in your own solution, or a simple tick of a box in the configuration of your Authentication provider.
Business / Data Schema
You need to be careful to separate the two concepts of Authentication and a User in the business domain. Regardless of where or what methodology you use to Authenticate your users, from a data integrity point of view it is important that there is a User concept in the database to associate related data for each user.
So there is no way around it, your business domain logic requires a table to represent a User in this business domain.
This User table should have an arbitrary Primary Key that is specific to the application domain, and in that table store the token that is used to map that business user to the Authentication process. Then, throughout your model, you can create FK references back to the user table.
In this way it may be possible for you to map users to multiple different providers, or to easily change the provider with minimal or zero impact on the rest of the business domain model.
What is important from a business process point of view is that the application can resolve the correct business User from the token or claims provided in the response from the authentication provider.
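As an illustrative sketch only (SQLAlchemy, with made-up table and column names), the mapping could look like this: a local users table keyed by an application-specific primary key that also stores the external provider's subject identifier, while the rest of the model keeps FK references to the local key:

```python
# Illustrative sketch of a local User row keyed by an app-domain primary key
# that also stores the external provider's subject identifier.
# Table and column names are assumptions.
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Company(Base):
    __tablename__ = "companies"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    users = relationship("User", back_populates="company")

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)          # app-domain PK
    external_subject = Column(String, unique=True)  # e.g. the `sub` claim from the provider
    company_id = Column(Integer, ForeignKey("companies.id"))
    company = relationship("Company", back_populates="users")

# The rest of the model references users.id, so swapping authentication
# providers only changes how external_subject is resolved.
engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
```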
Authentication
If SSO (Single Sign On) is appealing to you then the choice of which Authentication provider to use can become an issue depending on the nature of your solution and the type of users who will be Authenticating. If the solution is tenanted to other businesses and offers B2B, or B2C focused activities then an Enterprise authentication solution like Azure AD, or Google Cloud Identity might make sense. You would register your product in the client's authentication domain so that they can manage their users and access levels.
If the solution is more public focussed then you might consider other social media Authentication providers as a means of simplifying Authentication for users, rather than forcing them to use your own bespoke Authentication process, for which they will invariably forget their password...
You haven't mentioned what language or runtime you are considering; however, if you do choose to write your own Authentication service, as a bare minimum you should consider implementing OAuth 2.0 to ensure that your solution adheres to standard practices and is compatible with other providers should you choose to use them later.
In a .NET based environment I would suggest Identity Server 4 as a base level of security; there are a lot of resources on implementation, and other frameworks should have similar projects or providers that you can host yourself. The point is that by using a standard implementation for your own Authentication Service, rather than writing one that is integrated into your software, you are not re-inventing anything; there is a lot of commercial and community support available to help you minimise the effort and cost to get things up and running.
Conclusion
Ultimately, if you are concerned with Profit, and let's face it most of us are, then re-creating the wheel just because you can adds a direct implementation and long-term maintenance Cost and so will directly reduce Profitability, especially when the effort to integrate existing Authentication providers into your solution is really low.
Even if you choose today to implement your own Authentication Service, it would be wise to implement it in such a way that you could easily offload that workload to an external provider; that is the natural evolution of security for small to mid-sized applications once users start to demand more stringent security requirements or additional features than it is cost effective to provide in your native runtime.
Once security is implemented in an application, the rest of the business process generally evolves and we neglect to come back and review authentication until after a breach (if we or the client ever detect such an event). For this reason it is important that we get security as right as we can from the very start of a solution.
Whilst not directly related, this discussion reminds me of my favourite quote from Eric Lippert in a comment on an SO blog post:
Eric Lippert on What senior developers can learn from beginners
...The notion that programming can be principled — that we proceed by understanding the abstractions afforded by the language, and then match those abstractions to a model of the business domain of the program — is apparently never taught to a great many programmers. Rather, many programmers proceed as though they’re exploring an undiscovered country, and going down paths more or less at random and hoping they end up somewhere good, no matter how twisted the path is that gets them there...
One of the reasons that we use external Authentication providers is that the plethora of developers who have come before us have already learnt the hard lessons about what to do, or not to do, and have evolved a set of standards and protocols that provide best-practice guidelines on how to protect our users and their data when they are using our software. Many of these external providers represent best-practice implementations, and they maintain them for us as the standards continue to evolve, so that we don't have to.
I am studying various types of access control models. So far I have come across MAC, ABAC, and RBAC, of which RBAC and ABAC are the popular ones. But none of them fits as a complete solution for all real-life scenarios.
That is why a hybrid model of RBAC and ABAC has often been proposed. I am still not able to understand this hybrid model and how it overcomes the drawbacks of RBAC and ABAC.
ABAC on its own is sufficient, since it can be used to implement RBAC policies. When people refer to a hybrid RBAC/ABAC model, they mean that roles and permissions are still managed in an identity management system (e.g. an LDAP directory) but that you now rely on policies (e.g. XACML) to drive the actual authorization.
Apps can still use the roles directly but would likely rely on a PEP for authorization decisions.
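As a toy illustration of that split (all attribute names are invented for the example), the role still comes from the identity store while the final decision also considers resource and context attributes:

```python
# Toy hybrid RBAC/ABAC check: the coarse permission is tied to a role from
# the identity store, then refined with resource and context attributes.
def fetch_roles(user_id):
    # stand-in for an LDAP/IdM lookup
    return {"alice": ["doctor"], "bob": ["nurse"]}.get(user_id, [])

def can_view_record(user_id, record, context):
    roles = fetch_roles(user_id)
    # RBAC part: the base permission requires a role
    if "doctor" not in roles:
        return False
    # ABAC part: refine with attributes of the resource and the context
    same_department = record["department"] == context["department"]
    during_shift = 8 <= context["hour"] < 20
    return same_department and during_shift

print(can_view_record("alice", {"department": "cardiology"},
                      {"department": "cardiology", "hour": 14}))
```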
Not an easy question to answer. Drawbacks are in the eye of the beholder. But there are some commonly held beliefs as to the limitations of RBAC, for example the 'role explosion' problem. This article, written by me, describes the problem and how it was solved in the Apache Fortress RBAC solution. Disclaimer: I'm a contributor to the Apache Fortress project.
https://iamfortress.net/2018/07/07/towards-an-attribute-based-role-based-access-control-system/
I'm wondering about the additional complexity involved in integrating with Auth0, versus the plenty of code already available for password complexity rules, UI, etc. (including the snowflake starter app), for authentication/user creation with the open-source Parse Server.
I am sure plenty of people have thought about this and was wondering what the consensus is: the requirement to keep profile/email in sync, relying on a 3rd party, the inability to customize the view, and I am sure many other issues.
At first I thought this was great and I would not need to worry about a lot of things, yet there are a lot of other things to worry about, plus not being able to customize.
What are the most convincing "PRO" answers?
Disclosure: I'm an Auth0 engineer.
TL;DR: I can talk about the pros and cons, but the definitive answer needs to be provided by you.
A bit about Auth0
Auth0 supports the authentication protocols in most widespread use (OAuth2/OIDC, SAML and WS-Federation), so integration into custom software that speaks these protocols, or can be made to speak them with available libraries, is relatively easy and friction-free. Side note: Parse Server does seem to support integration with OAuth-compliant identity providers.
It can be used as a standalone identity provider where your users register and authenticate with username/password credentials specific to your application, but it can also integrate with a lot of downstream identity providers, for example Google, GitHub and Twitter. This makes it really easy to enable different methods of authenticating users just by configuration, instead of having to talk directly to each individual provider and deal with their implementation discrepancies.
Finally, it provides enough extensibility points for you to still have significant degree of control on the authentication experience, for example:
Rules (JS code) allow you to customize the authentication process
Customization of Auth0 provided authentication user interface allows you to still have your own branding
Customization of Lock allows you to have a custom authentication experience integrated into your own app really quickly and with very little effort
Of course, no matter how many extension points there are, there's always some stuff that you will not be able to control. This can be seen as bad, but sometimes it's actually a good thing; it depends on the perspective and your specific requirements.
Comparison - Roll your own (RYO) vs Third-party service
On one side you'll have:
cost of acquisition of the service
cost of integration of the service with your product
On the other side you'll have:
cost of implementing the features you need
cost of ownership of those features
cost of integration of the new features in your product
In both cases you'll need some integration work in order to make all the parts fit, no matter who created the parts. You could argue that if you are the author of everything it will be easier to fit a square peg in a round hole, so let's say RYO wins by a small margin on that point.
It then all comes down to comparing the cost of acquisition versus the cost of implementation and ownership. I can't answer that one, but the cost of acquisition is generally easy to calculate, while the cost of implementing software is very hard to predict; on top of that, owning a custom authentication solution also takes a heavy toll... you know what they say, no one ever got fired for buying IBM. I won't go further than that, but it's safe to say it's easier to find yourself in a pickle if you roll your own security. :)
Conclusions
Go through the Auth0 trial, play with it and see what it has to offer and how that matches your requirements.
If you find something you're not able to accomplish leave a question here tagged auth0 or on Auth0 Forums.
For a proof of concept I want to store rights. I know there are different access control models (DAC, MAC, RBAC, ...). My first idea was to use a database, but I'm looking for more established standards like XACML; unfortunately I have not been able to find any real alternatives.
Thanks for any tips!
First, take a step back and look at comparable items.
In access control you have different models that have come up with time. Historically you first had DAC and MAC. You had the notion of access control lists (also known as identity-based access control or IBAC).
Then suddenly, the sole identity of a user was no longer enough. We started to organize users into roles and groups. That led to the creation of RBAC or role-based access control which NIST formalized into a standard.
Fast forward 10+ years and roles are not enough anymore. ACLs and RBAC are too user-centric. They do not cater for context or relationships. They are not fine-grained enough. A new model called ABAC or attribute-based access control emerges. NIST is also in the process of standardizing ABAC. ABAC is capable of implementing any type of access control requirement and can cater for user, resource, action, and context attributes.
You can read more on ABAC here.
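To make the ABAC idea concrete, here is a toy rule (attribute names and values are purely illustrative): "managers can approve purchase orders in their own department, up to a spending limit":

```python
# Toy attribute-based rule combining user, action, and resource attributes.
# Everything here is illustrative, not tied to any particular product.
def can_approve(user, action, resource):
    return (
        action == "approve"
        and user["role"] == "manager"
        and user["department"] == resource["department"]
        and resource["amount"] <= user["approval_limit"]
    )

user = {"role": "manager", "department": "sales", "approval_limit": 10_000}
po = {"type": "purchase_order", "department": "sales", "amount": 4_500}
print(can_approve(user, "approve", po))  # True
```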
So, what about XACML? XACML, the eXtensible Access Control Markup Language, is an implementation of the ABAC model; it is the most widespread implementation of ABAC. You ask whether there are alternatives. Some that come to mind include:
SecPal: this is (was?) a Microsoft research initiative. To the best of my knowledge, it is not used outside research.
Permis is a policy-based access control model. It is not widely used either.
Microsoft has its own language for Windows Server called SDDL. You can read more on that from Microsoft.
In practice though, most ABAC implementations I have seen use XACML or a mix of home-grown code + RBAC. Needless to say, the latter doesn't really scale well and is hard to maintain.
If you want to learn more, check out the following resources:
my own personal blog
my personal SlideShare
Suppose you are developing a platform which has a web-based interface for its users and APIs for third-party developers. Something similar to Salesforce (or even Facebook).
Salesforce and Facebook, both platforms have normal web-based interface for its users and APIs for third party developers.
Ideally any API internally calls the same function that is used by the web-based interface. For example, the "Create a Project" button and the "CreateProject" API call the same "createProject()" function internally. So you can maintain the same version for both, as in most cases they are tightly integrated.
Now sometimes you add a feature which makes you increment the minor version of the web-based interface, but since you are not implementing that feature in the API, the API version should remain as is.
How do you handle such cases? Should you maintain separate versions of your web-based interface and APIs for your platform?
It depends, because it is difficult to offer a conclusive answer to this question. But I will share some ideas and drill down into some scenarios to help as best I can.
I would suggest there should be two versions of the API: internal APIs and public APIs. At the caller's end, they would be two physically distinct APIs/endpoints, so that security policies and a lot of other things can be handled without exposing much information and without relying on code to enforce the distinction based on who's calling from where, as that may quite easily fail.
It is OK if both versions of the APIs consolidate down the line to some extent without involving any security risk, but a separate team of expert engineers should design this consolidation to be seamless yet safe. It's a trade-off between code reuse and everything else. This is very subjective and can turn into an endless discussion, but the software evolves very well as a result of this design process if it is agile and iterative.
The APIs should be externalizable and interoperable. If the project is very large, then different teams working on separate parts of the project will interact with each other's work using internal APIs only. No hanky-panky. No direct data access. Only APIs.
This approach will help you create awareness of what's being done in the project across all teams if the APIs are discoverable, which I personally believe is a very good thing for collaborative team work. In fact, it also helps with reusability. Testing becomes unified and automated. Every team becomes responsible for its own work, and hence it should be easy to address accountability.
There's more stuff that can go in here but I think this is sufficient at a high level.
If allowed, I would also read this question as
"Should you have purely service oriented architecture or not ?"
And my answer would be, **It depends**, because it is difficult to offer a conclusive answer to this...
Do not publish core functions directly via the API; instead, create all API functions as proxy functions, so that changes in core functions are handled in the proxy functions.
Change the public API version only if there is a change in the API input/output.
This way you achieve code reusability without incrementing the public API version frequently.
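A hedged sketch of that proxy idea (function and field names are illustrative): the public API exposes a thin wrapper whose contract is versioned independently of the core function, so internal changes don't force a public version bump:

```python
# Illustrative proxy pattern: the v1 public API wraps the shared core
# function and pins its own input/output contract.
def create_project(name, owner, labels=None, audit_context=None):
    # core function, shared with the web UI; its signature may evolve freely
    return {"id": 42, "name": name, "owner": owner, "labels": labels or []}

def api_v1_create_project(payload):
    # public proxy: fixed v1 contract (name + owner only); new core
    # parameters are filled with defaults so v1 callers are unaffected
    project = create_project(
        name=payload["name"],
        owner=payload["owner"],
        audit_context={"source": "public_api_v1"},
    )
    # only expose the fields promised by the v1 contract
    return {"id": project["id"], "name": project["name"]}

print(api_v1_create_project({"name": "Demo", "owner": "alice"}))
```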
Edit:
If you are talking about the software version number, my answer is yes.