Where to double-check attributes of the XACML request against Attribute Providers at the PDP?

I'm evaluating PDP engines, and at the moment I'm giving AuthzForce Core a try. Evaluating a request with the PDP works pretty well so far:
// My request and PDP configuration files
File confLocation = new File("D:/docs/XACML/AuthZForce/IIA001/pdp.xml"); // pdp.xml tells the PDP where the policy XML files are
File requestFile = new File("D:/docs/XACML/AuthZForce/IIA001/Request.xml");
// I instantiate the PDP engine and the XACML parser
final PdpEngineConfiguration pdpEngineConf = PdpEngineConfiguration.getInstance(confLocation, null, null);
PdpEngineInoutAdapter<Request, Response> pdp = PdpEngineAdapters.newXacmlJaxbInoutAdapter(pdpEngineConf);
XmlUtils.XmlnsFilteringParser xacmlParser = XacmlJaxbParsingUtils.getXacmlParserFactory(false).getInstance();
// I parse the request file
Object request = xacmlParser.parse(requestFile.toURI().toURL());
if (request instanceof Request) {
    // At this point I could access all request attributes or alter them
    // I let the PDP evaluate the request
    Response response = pdp.evaluate((Request) request);
    // I check the results inside the response
    for (Result result : response.getResults()) {
        if (result.getDecision() == DecisionType.PERMIT) {
            // it's permitted!
        } else {
            // denied!
        }
    }
}
Now, according to the literature, e.g. [1], I should not trust the attributes in the given XACML request file. Whenever possible, I have to check against an Attribute Provider (e.g. a patient database) whether the given attributes (e.g. the patient's birthdate) actually belong to the patient, in order to prevent attacks.
Otherwise an attacker could make the patient younger in the request in order to access the patient's record as a parent/guardian.
Questions
Is checking requests against Attribute Providers the task of the PDP or of another entity?
Did OASIS specify anything concrete about this issue, e.g. a workflow or a syntax for configuration files?
Is there a way to make my PDP engine aware of Attribute Providers?
Should I just check the provided request on my own before Response response = pdp.evaluate((Request) request);?

I can't speak for other XACML implementations, but regarding AuthzForce, Attribute Providers play the role of PIPs in official XACML terms (see the definition of PIP in the XACML spec's glossary), i.e. they are responsible for getting any additional attribute that is not in the XACML request context (which usually means it was not provided by the PEP originally) whenever the PDP needs it to evaluate the policy. This relates to steps 5-8 of the XACML standard data-flow model (§3.1 of the XACML 3.0 spec).
Besides, if you read the XACML spec carefully, you will notice that the actual entity calling the PIPs on behalf of the PDP is the so-called context handler. In practice this is a matter of implementation; the context handler can take many forms. In AuthzForce it is just a sub-component of the PDP, but you might have one on the PEP side as well, which is application-specific, especially in a typical ABAC/XACML scenario where the PDP is a remote service from the PEP's perspective and is possibly talking to many PEPs in completely different application environments.
As mentioned previously, for the workflow, look at section 3.1 (Data-flow model) of the XACML core spec. As for the syntax, the XACML core specification defines a syntax for policies, authorization decision requests and responses, and nothing else at this point. You may find other things in the XACML Profiles, but no such thing as a configuration syntax, to my knowledge.
In AuthzForce, the PDP engine is made aware of Attribute Providers by the PDP configuration, i.e. the pdp.xml file in your example. You'll need two other files (XML catalog and schema) depending on the Attribute Provider you want to use. This is documented in the Using Attribute Providers section of AuthzForce Core's wiki.
Your code looks like test code to me: you are getting the XACML request from a local file, so it seems you have full control over it and there is no need to check further. More generally, it depends on the actual use case; there is no universal rule for that. Some attributes (like a subject-id resulting from authentication) are specific to and only known by the PEP in its own application environment, so they are the responsibility of the PEP. Other attributes are more likely the responsibility of the PDP (through Attribute Providers) if they can be resolved in a central way, such as attributes in a company's directory or another kind of identity repository.
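To illustrate that last point: if an attribute like the patient's birthdate can only be trusted when it comes from the patient database, the PEP (or a context handler on the PEP side) should resolve or verify it there before building the XACML request, rather than forwarding whatever the client claims. Below is a minimal, purely illustrative Java sketch of that idea; PatientDirectory, the attribute key and the class name are all hypothetical, and real code would of course build the JAXB Request from the sanitized map.

import java.time.LocalDate;
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

/**
 * Illustration only: replace client-claimed attribute values with authoritative ones
 * before the PEP builds the XACML request. All names here are hypothetical.
 */
final class ClaimedAttributeSanitizer {

    /** Authoritative attribute source, e.g. the patient database. */
    interface PatientDirectory {
        Optional<LocalDate> birthdateOf(String patientId);
    }

    private final PatientDirectory directory;

    ClaimedAttributeSanitizer(PatientDirectory directory) {
        this.directory = directory;
    }

    /**
     * Returns the attributes the PEP should actually send to the PDP: the claimed
     * birthdate is discarded and replaced with the authoritative value (or dropped
     * if unknown), so the PDP never evaluates a client-controlled birthdate.
     */
    Map<String, String> sanitize(String patientId, Map<String, String> claimedAttributes) {
        Map<String, String> trusted = new HashMap<>(claimedAttributes);
        directory.birthdateOf(patientId).ifPresentOrElse(
                birthdate -> trusted.put("patient:birthdate", birthdate.toString()),
                () -> trusted.remove("patient:birthdate"));
        return trusted;
    }
}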

In addition to @cdan's excellent response, here are a few more pointers:
Is checking requests against Attribute Providers the task of the PDP or of another entity?
The PDP always trusts the information (attributes) it receives, whether it comes from the PEP or from the PIPs. As such, the PDP need not verify values it received from a PEP by checking with a PIP; that would be counter-productive and inefficient. If you cannot trust the PEP to send the right values, how can you trust it to enforce the right decision?
Did OASIS specify anything concrete about this issue, e.g. a workflow or a syntax for configuration files?
No, we did not. PIP behavior is outside the scope of the XACML spec.
Is there a way to make my PDP engine aware of Attribute Providers?
Should I just check the provided request on my own before Response response = pdp.evaluate((Request) request);?
The PDP should be configured with PIPs. The PDP will use all the PIPs it can.

Related

Differentiating between 404 types

I know the 404 vs 204 debate has been beaten to death, and I understand the argument for using 404 when there is no record in the table corresponding to a REST endpoint request, but it feels like there should be some way of differentiating between "This endpoint is malformed" and "there is no record in the table." For example if I have an endpoint like this:
https://mycloudfront.cloudfront.com/api/my-table/{userId}
Is there a recommended way of configuring error handling on the backend to differentiate between "no resource found because there is no entry for userId" and "no resource found because there is no table named my-table" or "no resource found because there is no cloudfront distribution named mycloudfront"?
I ask because it would be nice for the frontend to be able to tell the end user whether their request produced no result because they have no data in the table (in which case I would display a message encouraging them to take an action that would generate data) or because something went wrong (in which case I would display an error message).
it would be nice for the frontend to be able to tell the end user whether their request produced no result because they have no data in the table
That's what the response body is for.
Except when responding to a HEAD request, the server SHOULD send a representation containing an explanation of the error situation, and whether it is a temporary or permanent condition. RFC 9110.
Status codes are metadata in the transfer-of-documents-over-a-network domain (Webber, 2011): they indicate to general-purpose web components (browsers, proxies, caches, spiders, ...) the semantics (meaning) of the header fields and response body (e.g. does the message include a representation of a resource or a representation of an error?).
Bespoke HTTP message handlers (and human operators) are expected to look for information in the body (e.g. a 404 for a web page returns a picture of a fail whale and a bunch of links to different resources that might clarify what's gone wrong).
You can also leverage ideas like web linking (RFC 8288), if you want to describe relationships between the error and other resources.
Problem Details (RFC 7807) describes a standardized JSON schema for communicating error information, if you want a JSON representation but prefer not to do all of the schema design yourself.
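If you are on a JAX-RS stack, a minimal sketch of this might look as follows. The resource path mirrors the one in the question; the problem type URI, title and detail text are made up for illustration.

import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;
import jakarta.ws.rs.core.Response;

@Path("/api/my-table/{userId}")
public class MyTableResource {

    @GET
    public Response get(@PathParam("userId") String userId) {
        // Pretend lookup: the user is known, but has no rows yet.
        boolean hasRows = false;
        if (hasRows) {
            return Response.ok("...representation of the user's data...").build();
        }
        // 404 with a machine-readable explanation (RFC 7807), so the frontend can
        // distinguish "no data yet" from other not-found situations.
        String problem = """
                {
                  "type": "https://example.com/problems/no-data-yet",
                  "title": "No data for this user yet",
                  "status": 404,
                  "detail": "User %s exists but has not created any entries."
                }""".formatted(userId);
        return Response.status(404).type("application/problem+json").entity(problem).build();
    }
}

The frontend can then branch on the type member of the problem document rather than on the status code alone.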
First and foremost, REST has no endpoints but resources.
there should be some way of differentiating between "This endpoint is malformed" and "there is no record in the table."
By "This endpoint is malformed" I guess you probably mean the request issued to the server doesn't conform to the HTTP specification. As voice already mentioned, HTTP status code are coordination metadata for the outcome of the transportation and not necessarily the outcome of your business logic. Of course you need to come up with a mapping for problems you noticed while applying your business logic to the HTTP transport domain.
Unfortunately, REST is polluted with false assumption and believes. Plenty of people seem to think of it as HTTP based CRUD mostly done with JSON payloads. But this is just a very tiny fraction of what REST really is. At its heart it is a technique used in distributed computing to help decouple clients from server to allow the latter to evolve freely in future. Clients on the other hand are build with the inherent design decision of a possible change in mind and therefore get much more robust towards change in the end.
So, how does REST help to decouple clients from servers?
First, the spelling of a URI is not important. The URI needs to be valid, but that's basically it. Clients shouldn't parse the URI or try to extract knowledge from it, nor does a URI pattern like /api/user/1 and /api/user/1/stuff mean that those two URIs are somehow related. That's what link relations are for.
Next, in order to teach a client what a URI returned by the server is good for, URIs should come with one or more link-relation names, which should either be registered ones or at least follow the Web Linking extension mechanism, where a relation is basically just a further URI that does not necessarily need to point to a valid resource. Treat it like a predicate in a (Semantic Web) ontology.
Use forms similar to HTML forms, like HAL-forms, JsonForms or Ion, if your server needs further input from clients. Forms teach clients which HTTP method to use, which URI to send the request to, and what media type to encode the request in, and of course they describe the properties the resource has and/or the server expects input for. This information is enough to let a client send HTTP requests that are valid in terms of the transfer domain. Note that this does not mean there won't be any issues: requests might still fail to reach the server due to an internet outage on either end, be routed badly and exceed the maximum number of allowed hops, and so on, but depending on the HTTP method used, a client might automatically reissue the request once it hits its timeout threshold.
To increase the interoperability of the peers in a REST ecosystem, REST has a strong focus on media types. Think of a media type as the binding contract between a client and a server, which should be negotiated between the two. This guarantees that both are capable of exchanging messages they understand and are able to process. One difference from regular RPC services is that an RPC service is usually restricted to one payload format, while REST supports a more or less unlimited number of payload formats, depending on the media types supported. A media type is a human-readable description of how a payload should be encoded and processed; besides the syntax of the allowed elements, it contains a semantic description of the purpose of those elements. A payload issued for plain application/json doesn't really teach a client what the properties of the JSON objects in the payload mean, nor does it really support URIs in the first place. Note, however, that issuing a plain JSON request to the server is fine if the client was "instructed" that way by a form it was acting upon; the server simply expects that kind of payload then. Just look at how a typical HTML document is built up and read some of the tag definitions used within it, and you'll get the gist of this paragraph.
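To make the link-relation and media-type points a bit more concrete, here is a small, hypothetical JAX-RS sketch; the vendor media type and the relation URI are invented for illustration and would normally be documented or registered for your API.

import java.net.URI;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.Response;

@Path("/users/1")
public class UserResource {

    @GET
    @Produces("application/vnd.acme.user+json") // negotiated, documented media type (hypothetical)
    public Response get() {
        String body = "{\"name\": \"Jane Doe\", \"clearance\": 3}";
        return Response.ok(body)
                // The link relation tells the client what the related URI is good for,
                // so it never has to parse or guess URI patterns.
                .link(URI.create("https://api.acme.example/users/1/machines"),
                      "https://rel.acme.example/machines")
                .build();
    }
}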
Fielding himself was quite vocal about the latter two points in particular in his famous rant:
A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types ...
So, back to the actual question at hand. Is "there is no record in the table" really a business-logic error? You could also design the resource to return whatever is currently available, i.e. an empty list. In that case this at least spares you the hassle of mapping that business error onto the transfer domain.
If you want or need to express a business-logic failure to the client, you should, as voice also recommended, look into application/problem+json (or its XML alternative, application/problem+xml), which defines properties such as the type of failure, a general title, the status and details, among others. The type the response is issued for may define further properties specific to that type that are part of the payload. E.g. you may define an extension type http://acme.com/problem/validation which requires the payload to contain a target-ref property identifying the element that failed the validation check, as well as a property for the actual error message.
In the end some general recommendations in terms of REST are:
Design the interactions between client and server first as if you were interacting with a typical human-focused Web page, and then translate the interaction steps onto the application domain. REST in the end is nothing more than a generalization of how we humans have interacted on the Web for decades; it is basically Web surfing for applications rather than humans. Just as we humans follow the outlined state machine of, say, Amazon.com to order some books, computers can do the same. Therefore, design the whole interaction between client and server as a state machine that clients simply follow along and may exit at certain points.
Allow servers to teach clients what they need to know using form support, and use link relations to put the URIs the server returns into context with the current resource.

How to add dynamic authorization to JAX-RS service?

I am trying to secure REST services by adding authorisation. For example, all customers are allowed to call /rest/{custno}/machines/{machno} but they are only allowed to see the machines which they own.
I see that there are annotations like @RolesAllowed, but that doesn't help in this case.
I have tried using interceptors, and this seemed to work on WebSphere 8.5 but is not working on Tomcat 7 or 8. The interceptor was able to get the customer info from the session and from the path and ensure that they are the same or that the user has admin rights. It was quite nice to be able to generate an overview from the annotations of how each service is secured.
What is a typical approach to this kind of problem?
You should use ABAC/XACML, which will provide you with:
An architecture
A policy language (XACML or ALFA)
A request/response protocol to query for authorization decisions
Let's start with the policy.
For example, all customers are allowed to call /rest/{custno}/machines/{machno} but they are only allowed to see the machines which they own.
In pseudo-policy, using ALFA, this would become
/**
 * Control access to machines
 */
policyset machines {
    target clause objectType == "machine"
    apply firstApplicable
    /**
     * View machines
     */
    policy viewMachines {
        target clause actionId == "view"
        apply firstApplicable
        /**
         * Users are only allowed to see the machines which they own.
         */
        rule usersCanViewTheirOwnMachines {
            permit
            condition machine.owner == username
        }
    }
}
The nice thing with this approach is that you need not write any code for this. All of the authorization logic is done inside the policy.
Now, let's talk architecture. You will need:
an interceptor or Policy Enforcement Point (PEP), which in your case would be a JAX-RS filter or interceptor. The filter calls out to the authorization service to get a decision (see the sketch after this list).
an authorization service, also known as a Policy Decision Point (PDP), which evaluates the request you send against the policies it knows, such as the one you just wrote.
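For illustration, a bare-bones JAX-RS request filter acting as the PEP might look roughly like this. PdpClient and its isPermitted method are hypothetical stand-ins for whatever request/response API your PDP exposes (for example JSON or XACML over HTTP), and the filter assumes the matched resource uses the {custno} and {machno} path templates from the question.

import java.io.IOException;
import java.security.Principal;
import jakarta.ws.rs.container.ContainerRequestContext;
import jakarta.ws.rs.container.ContainerRequestFilter;
import jakarta.ws.rs.core.Response;
import jakarta.ws.rs.ext.Provider;

@Provider
public class AuthorizationFilter implements ContainerRequestFilter {

    /** Hypothetical client for the PDP's request/response protocol. */
    public interface PdpClient {
        boolean isPermitted(String subjectId, String action, String custno, String machno);
    }

    private final PdpClient pdp;

    // Register an instance of this filter programmatically so the PdpClient can be passed in.
    public AuthorizationFilter(PdpClient pdp) {
        this.pdp = pdp;
    }

    @Override
    public void filter(ContainerRequestContext ctx) throws IOException {
        Principal principal = ctx.getSecurityContext().getUserPrincipal();
        if (principal == null) {
            ctx.abortWith(Response.status(Response.Status.UNAUTHORIZED).build());
            return;
        }
        String custno = ctx.getUriInfo().getPathParameters().getFirst("custno");
        String machno = ctx.getUriInfo().getPathParameters().getFirst("machno");
        String action = "GET".equals(ctx.getMethod()) ? "view" : "modify";

        // The PEP only gathers attributes and enforces; the decision itself comes from the PDP.
        if (!pdp.isPermitted(principal.getName(), action, custno, machno)) {
            ctx.abortWith(Response.status(Response.Status.FORBIDDEN).build());
        }
    }
}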
Additional reading
ALFA on Wikipedia
XACML Architecture
XACML on Wikipedia

When using ABAC security, how do you look up rules?

When implementing ABAC/XACML, the spec indicates you should intercept requests for sensitive data with PEPs, which route the request to PDPs (the PEPs include attributes about the subject, environment, resource and action when calling the PDP).
The PDP then determines what rules need to be evaluated for an access decision.
From Wikipedia:
https://en.wikipedia.org/wiki/XACML
XACML provides a target,[5] which is basically a set of simplified conditions for the subject, resource, and action that must be met for a policy set, policy, or rule to apply to a given request. Once a policy or policy set is found to apply to a given request, its rules are evaluated to determine the access decision and response.
Policy set, policy and rule can all contain target elements.
It's my understanding that how the PDP decides what rules in the PIP are applicable is implementation-specific, but this seems like a very important part of the process: if you miss a rule, for instance, you would not evaluate the request properly. What ways have folks implemented this? What's worked and what hasn't? (I'm reluctantly leaning towards lookups against an EAV-ish table.)
You always configure a PDP with a set of policies. You can give the PDP any number of policies and policy sets (groups of policies), but you must specify the entry point, i.e. there must be a root policy. That root policy may then contain and/or link to other policies (or policy sets).
The PDP alone decides which policies to invoke and evaluate, based on the incoming request from the PEP. The PEP does not know how many policies there are. You would not miss a rule as you suggest in your question; it is the PDP's responsibility not to. You would not typically implement your own PDP; you would use an off-the-shelf one. There are several open-source engines (e.g. SunXACML) and commercial alternatives (e.g. Axiomatics).
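To make the target mechanism concrete, here is a deliberately simplified toy sketch in Java. It is not a real XACML engine and not how any particular PDP is implemented; it only illustrates that a target gates whether a policy's rules are evaluated at all, and that a combining algorithm (first-applicable here) picks the decision.

import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

/** Toy model of target matching and first-applicable combining; illustration only. */
final class ToyPdp {

    enum Decision { PERMIT, DENY, NOT_APPLICABLE }

    record Rule(Predicate<Map<String, String>> condition, Decision effect) { }

    record Policy(Predicate<Map<String, String>> target, List<Rule> rules) {

        /** Rules are only evaluated if the policy's target matches the request. */
        Decision evaluate(Map<String, String> request) {
            if (!target().test(request)) {
                return Decision.NOT_APPLICABLE;
            }
            for (Rule rule : rules()) {
                if (rule.condition().test(request)) {
                    return rule.effect(); // first applicable rule wins
                }
            }
            return Decision.NOT_APPLICABLE;
        }
    }

    record PolicySet(Predicate<Map<String, String>> target, List<Policy> policies) {

        /** The root policy set is the entry point; it delegates to the first applicable policy. */
        Decision evaluate(Map<String, String> request) {
            if (!target().test(request)) {
                return Decision.NOT_APPLICABLE;
            }
            for (Policy policy : policies()) {
                Decision decision = policy.evaluate(request);
                if (decision != Decision.NOT_APPLICABLE) {
                    return decision;
                }
            }
            return Decision.NOT_APPLICABLE;
        }
    }
}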
The PIP is used for attribute value retrieval, not for policy retrieval.

Is there a standard or preferred way to use obligations and advice in XACML and ALFA?

I wrote some obligations and advice, but I was wondering whether there is a widely accepted or formal way to do this properly. In other words: is there a standard or preferred way to use obligations and advice in ALFA?
I would really like to see an example of how to define an obligation (e.g. to log every request) and its content in a layered policy, such that it is always triggered (on every request), both on deny and on permit.
Or do you have to define a separate obligation for every policy set, policy and rule?
Do you have to define the exact content of such an obligation, or does this depend on the functionality of the PEP?
This is a great question.
While the specification (all versions) does define the structure of an obligation, and of advice in the case of XACML 3.0, it doesn't say how the PEP (policy enforcement point) is to implement the obligation. All the specification says is what should happen to the decision if a PEP fails to fulfil an obligation.
From a PEP code perspective, a best practice would be to write an ObligationHandler interface which you can implement for different obligations (a minimal sketch of such an interface follows the ALFA example below). The constructor for classes implementing the ObligationHandler interface would take the XACML request and response.
Example
obligation emailManager = "com.axiomatics.example.obligations.emailmanager"

policy documentAccess {
    apply firstApplicable
    rule allowAccessIfClearanceSufficient {
        condition user.clearance > document.classification
        permit
        on permit {
            obligation emailManager {
                email = email
                message = stringConcatenate("Employee ",
                    stringOneAndOnly(Attributes.subjectId),
                    " has obtained access to ",
                    stringOneAndOnly(Attributes.resourceId)
                )
            }
        }
    }
}
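On the PEP side, a minimal Java sketch of the ObligationHandler idea mentioned above might look like the following. All class and method names here are hypothetical, and for brevity the handler receives just the obligation's attribute assignments rather than the full XACML request and response; adapt it to whatever response model your PEP uses.

import java.util.Map;

/** A handler fulfils exactly one kind of obligation on the PEP side. */
interface ObligationHandler {

    /** The obligation id this handler is responsible for. */
    String obligationId();

    /** Fulfil the obligation; throw if it cannot be fulfilled so the PEP can deny access. */
    void handle(Map<String, String> obligationAttributes) throws ObligationFailedException;
}

class ObligationFailedException extends Exception {
    ObligationFailedException(String message) {
        super(message);
    }
}

/** Example handler for the emailManager obligation in the ALFA policy above. */
class EmailManagerObligationHandler implements ObligationHandler {

    @Override
    public String obligationId() {
        return "com.axiomatics.example.obligations.emailmanager";
    }

    @Override
    public void handle(Map<String, String> obligationAttributes) throws ObligationFailedException {
        String email = obligationAttributes.get("email");
        String message = obligationAttributes.get("message");
        if (email == null || message == null) {
            throw new ObligationFailedException("Missing email/message attribute assignment");
        }
        // Send the notification here (SMTP client, message queue, ...).
        System.out.printf("Would email %s: %s%n", email, message);
    }
}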
Other resources:
XACML Obligations
Enhancements and new features in XACML 3.0

Implementing versioning for a RESTful API with WCF or ASP.NET Web API

Assume I've read a lot about versioning a RESTful API, and I decided not to version the service through the URI but via media types (format and schema in the request's Accept header):
What would be the best way to implement a WCF or Web API service that serves requests which define the requested resource in the URI, and the format (e.g. application/json) and the schema/version (e.g. player-v2) in the Accept header?
WCF allows me to route based on the URI, but not based on headers, so I cannot route properly.
Web API allows me to define custom media type formatters, handling the requested format, but not the schema (e.g. the return type PlayerV1 or PlayerV2).
I would like to implement a service (either with WCF or Web API) which, for this request (pseudo code):
api.myservice.com/players/123 Accept format=application/json; schema=player-v1
returns a PlayerV1 entity, in json format
and for this request:
api.myservice.com/players/123 Accept format=application/json; schema=player-v2
returns a PlayerV2 entity, in json format.
Any tips on how to implement this?
EDIT: To clarify why I want to use content negotiation to deal with versions, see here: REST API Design: Put the “Type” in “Content-Type”.
What you are describing here does not look like versioning to me; it is more a matter of content negotiation. The Accept header expresses the client's wishes regarding the format of the resource, and the server should grant those wishes or return 406. So if we need more of a concept of a contract (although Web API, unlike RPC, does not define one), then putting the version in the resource URI is more solid.
Best practices for versioning have yet to be fully settled, but most REST enthusiasts believe putting the version in the URL is the way to go (e.g. http://server/api/1.0.3/...). This also makes more sense to me, since with your content-negotiation approach the server has to maintain backward compatibility, and I can only imagine the server code getting more and more complex. With the URL approach you can make a clean break: old clients can happily use the previous version while new clients can enjoy the benefits of the new API.
UPDATE
OK, now the question has changed to "Implementing content negotiation in a RESTful API".
Type 1: Controller-oblivious
Basically, if content negotiation only involves the format of the resource, implementing or using the right media type formatter is enough: for example, when the negotiation only decides between returning JSON or XML. In these cases the controller is oblivious to content negotiation.
Type 2: Controller-aware
The controller needs to be aware of the content negotiation. In this case, parameters need to be extracted from the request and passed in as action parameters. For example, let's imagine this action on a controller:
public Player Get(string schemaVersion)
{
...
}
In this case, I would use classic MVC-style value providers (see Brad Wilson's post on ValueProviders; it is about MVC, but Web API's value providers look similar):
public Player Get([ValueProvider(typeof(RequestHeadersSchemaValueProviderFactory))]string schemaVersion)
{
...
}