SCIM 2.0 minimal requirement

We are currently implementing a SCIM 2.0 service based on RFC 7643; our current user database doesn't contain any groups, nor any group management.
We would like to know what is mandatory in order to be compliant with the industry standard.
Can we simply implement the user endpoints, or must we implement the group/resource endpoints as well?
Thanks for your help

The only way to know what is mandatory is to carefully read the relevant RFCs (RFC 7643 and RFC 7644).
In terms of your specific questions: firstly, whether implementation of groups is mandatory. RFC 7643 section 4 says that:
This section defines the default resource schemas present in a SCIM server. SCIM is not exclusive to these resources and may be extended to support other resource types
In my reading, that implies that support for the Group schema as defined in section 4.2 of that RFC, and the "/Groups" endpoint defined in RFC 7644 section 3.2, is mandatory. (Admittedly, the RFCs would be clearer on this point if they actually used the word "MUST" in those sections.)
However, to my knowledge, the SCIM standard nowhere requires that any groups actually exist, nor that any particular client has the privilege to create them. So, if your server had a dummy implementation of groups, in which the Group schema exists along with the /Groups endpoint, but no groups actually exist and any attempt to create a group results in an HTTP 403 error, that would comply. Another (somewhat more involved) option might be to have a single group "Everyone", of which everyone is automatically a member, but to reject with a 403 error any attempt to create further groups or to remove any user from the "Everyone" group.
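To make the first of those options concrete, here is a minimal sketch of such a dummy /Groups implementation. It assumes a Python/Flask server; the framework choice and the error detail text are illustrative, not anything the RFCs prescribe.

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/Groups", methods=["GET"])
def list_groups():
    # A well-formed SCIM ListResponse that simply never contains any groups.
    return jsonify({
        "schemas": ["urn:ietf:params:scim:api:messages:2.0:ListResponse"],
        "totalResults": 0,
        "Resources": [],
    })

@app.route("/Groups", methods=["POST"])
def create_group():
    # No client is privileged to create groups.
    return jsonify({
        "schemas": ["urn:ietf:params:scim:api:messages:2.0:Error"],
        "status": "403",
        "detail": "Group creation is not permitted.",
    }), 403
```

The Group schema and endpoint exist, so a client that asks for groups gets a valid (empty) answer, while any attempt to create one is rejected.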
You also ask whether the filtering parameters are mandatory. RFC 7644 section 3.4.2.2 states that implementation of the filter= parameter is OPTIONAL. However, startIndex= and itemsPerPage= are not filtering parameters; they are pagination parameters. (Strictly speaking, RFC 7644 defines startIndex= and count= as the pagination request parameters; itemsPerPage appears in the list response.) [Section 3.4.2.4](https://www.rfc-editor.org/rfc/rfc7644#section-3.4.2.4) defines the pagination parameters and, unlike filtering (and sorting), does not explicitly label them OPTIONAL, which implies they are mandatory. Furthermore, the service provider configuration schema in RFC 7643 section 5 defines attributes the server can use to indicate which optional features it supports (e.g. patch, bulk, filter, sort, etc.); there is no attribute to indicate whether the server implements pagination, which is further evidence that it is mandatory.
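As a sketch of those pagination semantics (Python, illustrative only; the resource list is assumed to be already filtered and sorted):

```python
# Pagination per RFC 7644 section 3.4.2.4: startIndex is 1-based and
# defaults to 1; count caps the page size (a negative count counts as 0).
# The ListResponse reports itemsPerPage, the number of results returned.
def paginate(resources, start_index=1, count=None):
    start = max(start_index, 1) - 1  # normalize to a 0-based offset
    page = resources[start:] if count is None else resources[start:start + max(count, 0)]
    return {
        "schemas": ["urn:ietf:params:scim:api:messages:2.0:ListResponse"],
        "totalResults": len(resources),
        "startIndex": max(start_index, 1),
        "itemsPerPage": len(page),
        "Resources": page,
    }
```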

Related

Could REST API OPTIONS be used as the HATEOAS only request?

As I've understood it, REST MUST use the HATEOAS constraint to be implemented properly. My understanding of HATEOAS is that basically every resource should share information about what communication options it has and how the consumer can use those options to achieve their end goal.
My question is whether the HTTP OPTIONS method could be used as a way to navigate a REST API. Basically, the response to an OPTIONS request would include the possible actions to take on a resource, which would make it possible to consume the API without knowing the endpoints.
e.g.
An initial request to the API
HTTP OPTIONS /api
Could return all resources available for consumption and their relations. Like a massive tree to consume the API and all it has to offer. This idea doesn't neglect implementing HATEOAS on other responses as well, but the OPTIONS request would allow navigation without returning data that the consumer might not actually want to consume.
Is this a really bad idea? Or is it something that is commonly implemented. I'm currently attempting to implement a REST API but I'm having a hard time understanding the benefit of HATEOAS if there is no way to navigate the API without actually requesting data that you might not necessarily need when consuming certain end points. And I assume HATEOAS aims to make clients consume resources by their relation and not actually hard coding the end point?
TL;DR
Could an HTTP OPTIONS request act as a way to navigate a REST API by returning what communication options are available for the requested resource, without actually returning the resource?
According to RFC 7231:
The OPTIONS HTTP method requests information about the communication options available for the target resource, at either the origin server or an intervening intermediary. This method allows a client to determine the options and/or requirements associated with a resource, or the capabilities of a server, without implying a resource action.
...
A server generating a successful response to OPTIONS SHOULD send any header fields that might indicate optional features implemented by the server and applicable to the target resource (e.g., Allow), including potential extensions not defined by this specification. The response payload, if any, might also describe the communication options in a machine or human-readable representation. A standard format for such a representation is not defined by this specification, but might be defined by future extensions to HTTP. A server MUST generate a Content-Length field with a value of "0" if no payload body is to be sent in the response.
So, basically, a response to an OPTIONS request will tell your client which HTTP operations may be performed on a certain resource. It is furthermore admissible to target the whole server by using * instead of a specific resource URI.
A response to an OPTIONS request may look like this:
```
HTTP/1.1 204 No Content
Allow: OPTIONS, GET, HEAD, POST
Cache-Control: max-age=604800
Date: Thu, 13 Oct 2016 11:45:00 GMT
Expires: Thu, 20 Oct 2016 11:45:00 GMT
Server: EOS (lax004/2813)
x-ec-custom-error: 1
```
which states that the resource supports the operations listed in the Allow header of the response. Via the Cache-Control header, a client knows that it can, by default, cache responses to safe requests (GET and HEAD) for up to 7 days (the value is given in seconds). The x-ec-custom-error header is a non-standard header specific to a particular piece of software, in this case an ECS server. According to this Q & A, its meaning isn't publicly documented and is therefore application-specific.
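For illustration, a client might probe those options like this (a sketch using Python's requests library; the URI is hypothetical):

```python
import requests

# Ask the server which methods this resource supports.
resp = requests.options("https://api.example.com/articles")
allowed = {m.strip() for m in resp.headers.get("Allow", "").split(",") if m.strip()}
print("Supported methods:", sorted(allowed))  # e.g. ['GET', 'HEAD', 'OPTIONS', 'POST']

# The client can then decide which actions to offer, e.g. a "create"
# action only if POST is among the advertised methods.
can_create = "POST" in allowed
```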
As for returning a tree of resources traversable from the resource the OPTIONS operation was requested on: technically this would be possible; however, certain systems might produce an almost never-ending list of URIs, so such a design is questionable for larger systems.
My understanding of HATEOAS is that basically every resource should share information about what communication options it has and how the consumer can use those options to achieve their end goal.
Hypertext as the engine of application state (HATEOAS) is basically just a requirement to use the interaction model that has been used on the Web quite successfully for decades, and to offer the same functionality to applications. This enables applications to surf the Web similarly to the way we humans do.
Great, but how does it work?
On the Web we use links and Web forms all the time. Through a Web form, a server is able to teach a client what properties a certain resource supports or expects. But that's not all! The same form also tells your client where to send the request (the target URI), the HTTP method to use and, usually implicitly, the media type the payload needs to be serialized to when sending the request to the server. This, in essence, makes out-of-band API documentation unnecessary, as all the information a client needs to make a valid request is already given by the server.
On a typical Web site you might have a table of entries which offers the option to add new entries, or to update or delete existing ones. Usually such links are hidden behind fancy images, e.g. a dustbin for deleting an entry and a pencil for editing an existing one, where the image represents an affordance. The affordance of an element makes clear what you should do with it or what its purpose is: a button on a page wants to be pushed, a slider widget wants to be changed, a text field waits for user input. As applications aren't that keen on working with images, a further concept is used instead: link relation names serve exactly this purpose. For example, if you have a pageable collection consisting of multiple pages of 25 entries each, you might be familiar with a widget containing arrows to page through that collection. A link here should usually be annotated with link relation names such as self, next, prev, first or last. The purpose of such links is quite clear; some others, like prefetch, which indicates that a resource can be loaded in the background early because it is very likely that the next action will request it, may be less intuitive at first. Such link relation names should be standardized or at least follow the Web Linking extension mechanism.
With the help of link relation names, a client that knows to look for URIs annotated with next, for example, will still work if the server decides to change its URI scheme, as it treats the URI as opaque.
Of course, both client and server need to support the same media type, and that media type must furthermore be able to represent such capabilities. Plain application/json, for example, cannot provide such support. HAL JSON or JSON Hyper-Schema at least add support for links and link relation names to JSON-based documents, while hal-forms, halo+json (halform) and ion might be used to teach a client how a request needs to be created. The question here shouldn't be which media type to support, but how many different ones you want to support, as the more media types your API is able to handle, the more likely it will be able to interact with arbitrary clients not under your control.
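As a small illustration of such link-relation-driven navigation, here is a sketch of a client paging through a HAL collection (Python with the requests library; the item relation name and the entry URI are assumptions for the example):

```python
import requests

def fetch_all(entry_uri):
    """Collect embedded items by following "next" links; URIs stay opaque."""
    items, uri = [], entry_uri
    while uri:
        doc = requests.get(uri, headers={"Accept": "application/hal+json"}).json()
        items.extend(doc.get("_embedded", {}).get("item", []))
        # Look up the next page by its link relation name, never by URI pattern.
        uri = doc.get("_links", {}).get("next", {}).get("href")
    return items
```

If the server restructures its URIs, such a client keeps working as long as the next relation is still offered.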
These concepts allow you to use the controls given in the server response to "drive your workflow" forward. In essence, what you as an API designer should do is design the interactions of a client with your API so that they follow a certain "domain application protocol", as Jim Webber termed it, or "state machine", as Asbjørn Ulsberg put it, which guides a client through its task, e.g. ordering from your shop API.
So, in short, HATEOAS is comparable to Web surfing for applications: by making use of named link relations and form-like media type representations, you can take actions based solely on the responses retrieved from a server, instead of having to bake external knowledge stemming from some reference documentation page (Swagger, OpenAPI or the like) into your application.
But how does HATEOAS benefit the consumer in practice then?
First, a client does not have to consult any external documentation, other than perhaps the current media type specification, though support for well-known media types is usually already baked into popular frameworks, or can at least be added through plugins or further libraries. Once the media type is understood and supported, interaction with all services that also support that media type is possible, regardless of their domain. This allows the same client implementation to be reused to interact with service A and service B out of the box. In RPC-like systems you'd need to integrate the API of service A first, and if you also want to interact with service B, you need to integrate that API separately. Most likely those APIs are incompatible and thus don't allow the reuse of the same classes.
Without knowing the URL for a resource, is the idea that the consumer can discover it by browsing the API, but they will still have a hard dependency on the actual URL? Or is HATEOAS's purpose to leverage actions on a certain resource, i.e. the consumer knows the users endpoint but does not need to know the endpoints for actions to take on the users resource, because those are provided by the API?
A client usually does not care about the URI itself; it cares about the content a URI may provide. Compare this to your typical browsing behavior: do you prefer a meaningful text that summarizes a link's content so you can decide whether to request that resource, or do you prefer parsing and dissecting a URI to learn what it might do? Minifying or obfuscating URIs will do you no favors in the latter case, though.
A further danger arises from URIs and resources that a client puts meaning into. A sloppy developer will interpret such URIs/resources and implement a tiny hack to interact with the service, assuming the URI/resource will remain static. For example, it is not unreasonable to assume that a URI /api/users/1 returns some user-related data, and based on the response format a tiny Java class is written that expects to receive a field for username and one for email. If the server now decides to add additional data or rename its fields, the client suddenly will not be able to interact with that service any further. And rest assured that in practice, especially in the EDI domain, you will have to interact with clients that are not meant to interact with the Web, or whose programmers implemented their own JSON framework that can't cope with changing orders of elements or can't handle additional optional fields, even though the spec contains notes on those issues. Fielding claimed that:
A REST API should never have “typed” resources that are significant to the client. Specification authors may use resource types for describing server implementation behind the interface, but those types must be irrelevant and invisible to the client. The only types that are significant to a client are the current representation’s media type and standardized relation names. (Source)
Instead of typed resources, content type negotiation should be used to support interoperability between the different stakeholders in the network.
As such, the URI itself is just the identifier of a resource, mainly used to learn where to send a request. With the help of meaningful link relation names, a client should know that it is interested in, say, http://www.acme.com/rel/orders if it wants to send an order to the service, and it simply looks up the URI annotated with that Web Linking extension relation name. Whether the link relation name is just an annotation (i.e. a further attribute on the URI element) or the URI is attached to the link relation name (i.e. as an embedded object of the link relation name) depends on the actual media type. This way, if a server ever decides to change its URI scheme or move resources around, for whatever reason, the client will still be able to find the URI, and it couldn't care less which characters are present in the URI; it just treats the URI as an opaque thing. The nice thing here is that a URI can be annotated with multiple link relation names simultaneously, which allows a server to "offer" that URI to clients that support different link relation names. In the case of forms, the URI to send the request to is typically contained in the action attribute of the form element or the like.
As you can hopefully see, with HATEOAS there is no need for a hard dependency on URIs; at most there may be a dependency on the link relation name. It still requires URIs to learn where to send the request, but by looking up the URI via its accompanying link relation name you make the handling of URIs much more dynamic, as it allows a server to change a URI anytime it wants or has to.

What are some of the well known products/services around Authorization (RBAC and ABAC) that implement standards like XACML?

Our customers are organizations.
Each organization would be given 3 default roles (after onboarding),
but would also have the capability to create more roles.
Roles define the level of access to the underlying resources (not just at the API level, which is handled via scopes, but at the resource level).
Another use case is that of a superuser who can act across organizations and perform any action.
Please share your thoughts on whether these use cases can be solved (and with what ease) in the product or service you recommend. Thanks.
You can find a list of XACML implementations on the dedicated Wikipedia page. To address your use case, which is very RBAC-oriented, I would use the RBAC profile of XACML, so make sure the implementation you choose supports it.
cdan is right. Start with the Wikipedia page for XACML (and the ones for ABAC and ALFA) which list implementations but also use cases. You have quite a broad range of commercial and open-source alternatives.
In ABAC, you tend to write authorization policies independently of the underlying technology. This means that whether access is via APIs or via a webpage should not matter when defining the authorization.
The key questions you want to ask yourself are:
Are there relationships in the authorizations? E.g.
a user with role == 'manager' can do action == 'view' on object == 'record' if object.organization == user.organization (see the sketch after this list).
Do I care about auditing the authorization? Do I need to prove my compliance?
Do I need context and runtime attributes e.g. time and location?
Do I need to apply the same authorization logic across multiple apps, APIs, and data?
Do I need to regularly update my authorizations?
If you answered Yes to one or more of the above questions, you likely need ABAC.
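For illustration, the relationship rule above could be expressed as a condition like the following (a Python sketch; in practice the condition would live in a policy language such as XACML or ALFA rather than in application code):

```python
def is_permitted(user, action, obj):
    # "A user with role 'manager' can view a record of their own organization."
    return (
        user["role"] == "manager"
        and action == "view"
        and obj["type"] == "record"
        and obj["organization"] == user["organization"]
    )
```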

REST HATEOAS: How to know what to POST?

I still don't understand how the client knows what data to POST when creating a resource. Most tutorials/articles omit this, and in their examples a client always seems to know a priori what to post (i.e. using out-of-band information). Like in this example, the consumer knows that he has to place the order by setting what <drink> he wants.
I can only imagine a few approaches, and I don't know if they are valid:
1. Returning an empty resource
The client discovers a link to /resource with a link to /resource/create and relation "create". A GET to /resource/create returns an empty resource (all attributes are empty) and a link to /resource/create with relation "post". The client then sets values for all attributes and POSTs this to /resource/create, which returns a 201 (Created). This means that the CRUD operations are not located at the resource endpoint but at a URI like /resource/create, and that the client might set attributes the server ignores (like a creation date, which is set on the server side).
2. Returning a form
Basically the same approach as above, except that instead of a resource, some meta-information is returned about which fields to post and what datatype each attribute needs to have (a sketch follows after this list). Like in this example. Still, the creation endpoint is not located at /resource but at /resource/create.
3. Creating by updating
A POST to /resource immediately creates an empty resource and returns a link to this resource. The client can then follow this link and update the resource with the necessary data via PUTs.
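To make approach 2 more tangible, a form-style response might look roughly like this (a hypothetical sketch shown as a Python literal; the field names and target URI are invented, and real media types such as hal-forms or ion standardize the actual shape):

```python
create_form = {
    "_links": {"target": {"href": "/orders"}},  # where to POST
    "method": "POST",
    "fields": [  # what the server expects in the payload
        {"name": "drink", "type": "string", "required": True},
        {"name": "size", "type": "string", "required": False},
    ],
}
```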
So what is the best approach that still follows the HATEOAS paradigm, and why do all of these tutorials (and even books like REST in Practice) omit this problem?
UPDATE:
I recently found out that the Sun Cloud API seems to be pretty close to an "ideal" REST HATEOAS API. It not only defines some resources and does hyperlinking between them, it also defines media types and versioning. With all this theoretical discussion, it's pretty good to have a concrete example. Maybe this helps some readers of this question.
Most tutorials and books about REST are very misleading, because there are many misconceptions about REST and no authoritative source other than Fielding's dissertation itself, which is incomplete.
REST is not CRUD. A POST is not a synonym for CREATE. POST is the method to be used for any action that isn't already standardized by HTTP. If it's not standardized by HTTP, its semantics are determined by the target resource itself, and the exact behavior has to be documented by the resource's media type.
With HATEOAS, a client should not rely on out-of-band information for driving the interaction. The documentation should focus on the media-types, not on the URIs and methods. People rarely get this right because they don't use media-types properly, and instead document URI endpoints.
For instance, in your example everything has the application/xml media type. That's the problem. Without proper media types, there's no way to document resource-specific semantics when everything has the same media type, short of relying on URI semantics, which would break HATEOAS. Instead, a drink should have a media type like application/vnd.mycompany.drink.v1+xml, and your API documentation for that media type can describe what to expect when using POST with a rel link.
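For illustration, a client that has discovered the order link and understands that media type might then send something like this (a sketch with Python's requests library; the URI and payload are invented):

```python
import requests

resp = requests.post(
    "https://api.example.com/orders",  # looked up via its rel link, not hardcoded
    data="<drink><name>latte</name></drink>",
    headers={"Content-Type": "application/vnd.mycompany.drink.v1+xml"},
)
print(resp.status_code)  # e.g. 201 if the order resource was created
```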

REST API: How to name a derived resource?

There are a gazillion questions about RESTful interface naming conventions, especially around singular vs. plural resource names. A common convention is:
```
GET    /users       Retrieve collection of users
GET    /users/{id}  Retrieve user
POST   /users       Create user
PUT    /users/{id}  Update user
DELETE /users/{id}  Delete user
```
However, the above does not work when the resource is a value derived from the environment.
My hypothetical application has the following endpoint:
GET /source Get information about the source of the query.
That responds with:
Referrer URL
Remote IP
Since source is derived from the environment, there is never more than one source, so calling the resource sources or providing a sources/{foo} lookup is not practical.
Does REST style propose how to handle naming of these entities?
Dr. Fielding notes in section 6.2.1 of his famous dissertation:
...authors need an identifier that closely matches the semantics they intend by a hypermedia reference, allowing the reference to remain static even though the result of accessing that reference may change over time.
Therefore, it makes sense to use a plain source endpoint.
It would be a different thing if you wanted to provide more general service around IP address provided, like this one.
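For what it's worth, such a /source endpoint only takes a few lines, e.g. in Python/Flask (a sketch; the response field names are arbitrary choices):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/source", methods=["GET"])
def source():
    # The representation is derived entirely from the requesting environment.
    return jsonify({
        "referrerUrl": request.referrer,  # value of the Referer header, if any
        "remoteIp": request.remote_addr,
    })
```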

Implementing Access Control in a system

I have come across many different models for access control in a system. When implementing an access control model in a system, we usually hardcode the rules/rights in the database (considering an RDBMS) by creating separate tables for access control. Alternatively, these rules/rights can be stored in an XML database.
I would like to know the difference between storing the rules in an RDBMS and in an XML database. Also, when should we use XACML to implement an access control model in a system? I mean, how can one decide whether to hardcode the rules/rights in the database or to use the XACML policy language?
Thanks.
Disclaimer: I work for Axiomatics, a vendor implementation of XACML
If you go your own way, the authorization logic could be stored either in the RDBMS or in an XML database. It doesn't matter; I doubt that XML brings you any added capabilities.
Now, if you want an authorization system that can cater for RDBMS systems and other types of applications too (CRM, .NET, Java...), then you want a solution that is agnostic of the type of application it protects. That's the goal of XACML, the eXtensible Access Control Markup Language.
XACML provides attribute-based, policy-based access control (ABAC and PBAC). This gives you the ability to write extremely expressive authorization policies and manage them centrally in a single repository. A central authorization engine (called a Policy Decision Point, or PDP) then serves decisions to your different applications.
As Bell points out, the minimum set of attributes you will typically need are attributes about the user (subject), the resource, and the action. XACML also lets you add environment attributes. This means you can write the following type of policy:
Doctors can view the medical records of patients they are assigned to.
Doctors describes the user / subject
view describes the action
medical records describes the targeted resource
of patients describes the targeted resource too. It's metadata about the resource
they are assigned to is an interesting case: it's an attribute that defines the relationship between the doctor and the patient. In ABAC, this gets implemented as doctor.id == patient.assignedDoctorId. This is one of the key benefits of using XACML.
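Expressed as code for illustration (a Python sketch; in XACML this becomes a policy condition comparing two attributes rather than application logic):

```python
def can_view_medical_record(doctor, patient):
    # "Doctors can view the medical records of patients they are assigned to."
    return doctor["id"] == patient["assignedDoctorId"]
```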
Benefits of XACML include:
- the ability to externalize the authorization logic as mentioned by Bell
- the ability to update authorization logic without going through a development/deployment lifecycle
- the ability to have fine-grained authorization implemented the same way for many different applications
- the ability to have visibility and audits on the authorization logic
HTH
The two are not mutually exclusive.
An XACML policy describes how to translate a set of attributes about an attempted action into a permit/deny decision. At minimum, the attributes would be who the user is (Subject), what they are trying to do (Action) and what they are trying to do it to (Object). Information such as the time, the source of the request and many others can be added.
The attributes of the user and the object will still have to be stored in the database. If you are grouping users or objects to simplify administration or to simplify defining access control rules, then you're going to have to manage all of that in the database too. All that data will then need to be passed to the XACML Policy Decision Point, which returns the permit/deny decision.
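Schematically, that hand-off looks like this (a Python sketch; the PolicyDecisionPoint class is a hypothetical stand-in for a real XACML PDP client, and the stubbed decision logic is purely illustrative):

```python
class PolicyDecisionPoint:
    """Hypothetical stand-in for a real XACML PDP."""
    def evaluate(self, attrs):
        # A real PDP evaluates XACML policies; this stub just permits owners.
        owner = attrs["resource"]["ownerId"]
        return "Permit" if attrs["subject"]["id"] == owner else "Deny"

pdp = PolicyDecisionPoint()
request_attributes = {
    "subject": {"id": "alice", "role": "editor"},         # who
    "action": {"id": "view"},                             # what
    "resource": {"type": "article", "ownerId": "alice"},  # on what
    "environment": {"time": "11:45"},                     # context
}
decision = pdp.evaluate(request_attributes)  # "Permit"
```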
The advantage of using XACML to define these rules, rather than writing your own decision logic for rules defined in the database, is that the assessment of the rules can be handed off to an external application. Using a mature, tested XACML implementation (there are open-source options) will keep you from making mistakes when building the checks into your own code.
Hardcoding the policies in your code is a very bad practice, I think. In that case you mix the business logic of your resources with the permission checks of the access control system. XACML is a big step in the right direction, because you can create a fully automatic access control system if you store your rules in a separate place (not hardcoded in the business logic).
By the way, you can store those rules in the database too. For example (in a fictive programming language):
hardcoded RBAC:

```
#xml
role 1 editor

#/articles
ArticleController
    #GET /
    readAll () {
        if (session.notLoggedIn())
            throw 403;
        if (session.hasRole("editor"))
            return articleModel.readAll();
        else
            return articleModel.readAllByUserId(session.getUserId());
    }
```
not hardcoded ABAC:

```
#db
role 1 editor
    policy 1 read every article
        constraints
            endpoint GET /articles
        permissions
            resource
                projections full, owner
role 2 regular user
    policy 2 read own articles
        constraints
            endpoint GET /articles
            logged in
        permissions
            resource
                projections owner

#/articles
ArticleController
    #GET /
    readAll () {
        if (session.hasProjection(full))
            return articleModel.readAll();
        else if (session.hasProjection(owner))
            return articleModel.readAllByUserId(session.getUserId());
    }
```
As you can see, the non-hardcoded version is much clearer than the hardcoded one because of the separation of concerns.
XACML is a standard (which can do ten times more than the example above), so you don't have to learn a new access control system for every project, and you don't have to implement XACML in every language, because others have already done it, if you are lucky...