Should the schema be altered if the server does not handle all attributes? (SCIM)

If our SCIM server only handles a small subset of the attributes in the core User schema and ignores most other attributes:
Should the server return a reduced schema that reflects what is supported on the schemas endpoint?
Or should it return the full default core schema definition?
And if the schema is altered to reflect what the server actually supports, should it still be named urn:ietf:params:scim:schemas:core:2.0:User, or does it need to get a different name?

Should the server return a reduced schema that reflects what is supported on the schemas endpoint?
Yes.
Or should it return the full default core schema definition?
No.
Service providers are free to omit attributes and to change attribute characteristics, provided doing so does not violate any other requirement in the RFC and does not redefine the attributes. The purpose of the discovery endpoints, including "/Schemas", is to give service providers a way to publish their actual schema definitions.
And if the schema is altered to reflect what the server actually supports, should it still be named urn:ietf:params:scim:schemas:core:2.0:User, or does it need to get a different name?
Provided you meet the above criteria, the schema should continue to be named urn:ietf:params:scim:schemas:core:2.0:User. But you should use custom resources and/or extensions for new attributes/resources not defined in the RFC.
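To make that concrete, here is a minimal, hypothetical sketch in Go of a reduced /Schemas entry for a server that only supports userName and displayName (the attribute subset and all type names here are assumptions for this example, not mandated by the RFC; the point is that the id keeps the core URN because nothing is redefined):

package main

import (
	"encoding/json"
	"fmt"
)

// Attribute and Schema mirror a small subset of the SCIM schema
// representation from RFC 7643, Section 7.
type Attribute struct {
	Name        string `json:"name"`
	Type        string `json:"type"`
	MultiValued bool   `json:"multiValued"`
	Required    bool   `json:"required"`
	Mutability  string `json:"mutability"`
	Returned    string `json:"returned"`
	Uniqueness  string `json:"uniqueness,omitempty"`
}

type Schema struct {
	ID          string      `json:"id"`
	Name        string      `json:"name"`
	Description string      `json:"description"`
	Attributes  []Attribute `json:"attributes"`
}

func main() {
	user := Schema{
		// The core URN is kept: attributes are omitted, not redefined.
		ID:          "urn:ietf:params:scim:schemas:core:2.0:User",
		Name:        "User",
		Description: "User Account (reduced to the attributes this server supports)",
		Attributes: []Attribute{
			{Name: "userName", Type: "string", Required: true, Mutability: "readWrite", Returned: "default", Uniqueness: "server"},
			{Name: "displayName", Type: "string", Mutability: "readWrite", Returned: "default"},
		},
	}
	out, _ := json.MarshalIndent(user, "", "  ")
	fmt.Println(string(out)) // the body a server could serve from GET /Schemas
}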
I agree that the RFC could perhaps be more clear about this, but there are some hints throughout, such as the following from Section 2:
SCIM's support of schema is attribute based, where each attribute may have different type, mutability, cardinality, or returnability. Validation of documents and messages is always performed by an intended receiver, as specified by the SCIM specifications. Validation is performed by the receiver in the context of a SCIM protocol request (see [RFC7644]). For example, a SCIM service provider, upon receiving a request to replace an existing resource with a replacement JSON object, evaluates each asserted attribute based on its characteristics as defined in the relevant schema (e.g., mutability) and decides which attributes may be replaced or ignored.
Additional references:
https://www.ietf.org/mail-archive/web/scim/current/msg02851.html
SCIM (System for Cross-domain Identity Management) core supported attributes

Multi-tenancy in Golang

I'm currently writing a service in Go where I need to deal with multiple tenants. I have settled on the single-database, shared-tables approach, using a 'tenant_id' discriminator for tenant separation.
The service is structured like this:
gRPC server      -> gRPC Handlers --\
                                     \_ Managers (SQL)
                                     /
HTTP/JSON server -> Handlers -------/
Two servers, one gRPC (administration) and one HTTP/JSON (public API), each running in its own goroutine and with its own respective handlers that can make use of the functionality of the different managers. The managers (let's call one 'inventory-manager') all live in different root-level packages. These are, as far as I understand it, my domain entities.
In this regard I have some questions:
I cannot find any ORM for Go that supports multiple tenants out there. Is writing my own on top of perhaps the sqlx package a valid option?
Other services in the future will require multi-tenant support too, so I guess I would have to create some library/package anyway.
Today, I resolve the tenants by using a ResolveTenantBySubdomain middleware for the public API server. I then place the resolved tenant ID in a context value that is sent with the call to the manager. Inside the different methods in the manager, I get the tenant ID from the context value. It is then used with every SQL query/exec call, or an error is returned if the tenant ID is missing or invalid. Should I even use context for this purpose?
Resolving the tenant on the gRPC server, I believe I have to use the UnaryInterceptor function for middleware handling. Since the gRPC API will only be accessed by other backend services, I guess resolving by subdomain is unnecessary here. But how should I embed the tenant ID? In the header?
Really hope I'm asking the right questions.
Regards, Karl.
I cannot find any ORM for Go that supports multiple tenants out there. Is writing my own on top of perhaps the sqlx package a valid option?
ORMs in Go are a controversial topic! Some Go users love them, others hate them and prefer to write SQL manually. This is a matter of personal preference. Asking for specific library recommendations is off-topic here, and in any event, I don't know of any multi-tenant ORM libraries – but there's nothing to prevent you from writing your own wrapper over sqlx (I work daily on a system which does exactly this).
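As a rough illustration of such a wrapper (a sketch under my own naming assumptions – TenantDB, ForTenant and the inventory table are hypothetical, not from any existing library):

package store

import "github.com/jmoiron/sqlx"

// TenantDB binds every query to a single tenant so callers cannot
// accidentally touch another tenant's rows.
type TenantDB struct {
	db       *sqlx.DB
	tenantID string
}

// ForTenant returns a handle scoped to one tenant.
func ForTenant(db *sqlx.DB, tenantID string) *TenantDB {
	return &TenantDB{db: db, tenantID: tenantID}
}

// GetInventoryItem always filters by the bound tenant_id discriminator.
func (t *TenantDB) GetInventoryItem(dest interface{}, id string) error {
	const q = `SELECT * FROM inventory WHERE tenant_id = $1 AND id = $2`
	return t.db.Get(dest, q, t.tenantID, id)
}

Handlers construct one TenantDB per request from the resolved tenant, and the managers only ever see the scoped handle.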
Other services in the future will require multi-tenant support too, so I guess I would have to create some library/package anyway.
It would make sense to abstract this behavior from those internal services in a way which suits your programming and interface schemas, but there are no further details here to answer more concretely.
Today, I resolve the tenants by using a ResolveTenantBySubdomain middleware for the public API server. I then place the resolved tenant ID in a context value that is sent with the call to the manager. Inside the different methods in the manager, I get the tenant ID from the context value. It is then used with every SQL query/exec call, or an error is returned if the tenant ID is missing or invalid. Should I even use context for this purpose?
context.Context is mostly about cancellation, not request propagation. While your use is acceptable according to the documentation for the WithValue function, it's widely considered a bad code smell to use the context package as currently implemented to pass values. Rather than use implicit behavior, which lacks type safety and many other properties, why not be explicit in the function signature of your downstream data layers by passing the tenant ID to the relevant function calls?
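For example (a minimal sketch – Manager, Item and ListItems are hypothetical names, not from your code):

package inventory

import (
	"context"
	"errors"
)

type Item struct {
	ID   string
	Name string
}

type Manager struct{ /* db handle, etc. */ }

// ListItems takes the tenant ID explicitly, so the compiler enforces
// that every caller supplies it; ctx remains for cancellation/deadlines.
func (m *Manager) ListItems(ctx context.Context, tenantID string) ([]Item, error) {
	if tenantID == "" {
		return nil, errors.New("inventory: missing tenant id")
	}
	// ... run the tenant-scoped query here ...
	return nil, nil
}

The middleware still resolves the tenant, but the handler extracts it once and passes it down as an ordinary argument.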
Resolving the tenant on the gRPC server, I believe I have to use the UnaryInterceptor function for middleware handling. Since the gRPC API will only be accessed by other backend services, I guess resolving by subdomain is unnecessary here. But how should I embed the tenant ID? In the header?
The gRPC library is not opinionated about your design choice. You can use a header value (to pass the tenant ID as an "ambient" parameter to the request) or explicitly add a tenant ID parameter to each remote method invocation which requires it.
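A minimal sketch of the header approach, reading the tenant ID from incoming gRPC metadata in a unary interceptor (the "tenant-id" key is my own assumption, not a gRPC convention):

package middleware

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/metadata"
	"google.golang.org/grpc/status"
)

type tenantKey struct{}

// TenantUnaryInterceptor rejects requests without a tenant-id metadata entry.
func TenantUnaryInterceptor(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
	md, ok := metadata.FromIncomingContext(ctx)
	if !ok || len(md.Get("tenant-id")) == 0 {
		return nil, status.Error(codes.Unauthenticated, "missing tenant-id metadata")
	}
	// Transport the value to the handler; as noted above, downstream
	// calls are better off taking the tenant ID as an explicit parameter.
	ctx = context.WithValue(ctx, tenantKey{}, md.Get("tenant-id")[0])
	return handler(ctx, req)
}

You would register it with grpc.NewServer(grpc.UnaryInterceptor(TenantUnaryInterceptor)).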
Note that passing a tenant ID between your services in this way creates implicit trust between them – if service A makes a request of service B and annotates it with a tenant ID, you assume service A has performed the necessary access control checks to verify a user of that tenant is indeed making the request. There is nothing in this simple model to prevent a rogue service C from asking service B for information about some arbitrary tenant ID. An alternative design would implement a more complex trust-nobody policy whereby each service is provided with sufficient access control information to make its own policy decision as to whether a particular request scoped to a particular tenant should be fulfilled.

Where to double-check attributes of the XACML-request against Attribute-Providers at the PDP?

I'm evaluating PDP engines and at the moment I'm giving AuthzForce Core a try. Evaluating a request with the PDP works pretty solidly so far:
// My request and pdp configuration files
File confLocation = new File("D:/docs/XACML/AuthZForce/IIA001/pdp.xml"); // pdp.xml tells the pdp where the policy XML files are
File requestFile = new File("D:/docs/XACML/AuthZForce/IIA001/Request.xml");

// I instantiate the pdp engine and the xacml parser
final PdpEngineConfiguration pdpEngineConf = PdpEngineConfiguration.getInstance(confLocation, null, null);
PdpEngineInoutAdapter<Request, Response> pdp = PdpEngineAdapters.newXacmlJaxbInoutAdapter(pdpEngineConf);
XmlUtils.XmlnsFilteringParser xacmlParser = XacmlJaxbParsingUtils.getXacmlParserFactory(false).getInstance();

// I parse the request file
Object request = xacmlParser.parse(requestFile.toURI().toURL());
if (request instanceof Request) {
    // At this point I could access all request attributes or alter them

    // I let the PDP evaluate the request
    Response response = pdp.evaluate((Request) request);

    // I check the results inside the response
    for (Result result : response.getResults()) {
        if (result.getDecision() == DecisionType.PERMIT) {
            // it's permitted!
        } else {
            // denied!
        }
    }
}
Now, according to the literature like [1], I should not trust the attributes in the given XACML request file. Whenever possible, I have to check against an Attribute Provider (e.g. a patient database) whether the given attributes (e.g. the patient's birthdate) actually belong to the patient, in order to prevent attacks.
Otherwise an attacker could make the patient younger in the Request in order to access the patient's record as a parent or guardian.
Questions
Is checking Requests against Attribute Providers the task of the PDP or of another entity?
Did OASIS specify anything concrete about that issue? E.g. workflow or syntax of configuration files
Is there a way to make my pdp engine aware of Attribute Providers?
Should I just check the provided request on my own before Response response = pdp.evaluate((Request) request);?
I don't know about other XACML implementations, but as regards AuthzForce, Attribute Providers play the role of PIPs in official XACML terms (see the definition of PIP in the XACML spec's Glossary), i.e. they are responsible for getting any additional attribute that is not in the XACML request context (which usually means it was not provided by the PEP originally) whenever the PDP needs it to evaluate the policy. This relates to steps 5-8 of the XACML standard data-flow model (§3.1 of the XACML 3.0 spec).
Besides, if you read the XACML spec carefully, you'll notice that the actual entity calling the PIPs for the PDP is the so-called context handler. In practice this is a matter of implementation, and the context handler can take many forms. In AuthzForce it is just a sub-component of the PDP, but you might have one on the PEP side as well that is application-specific, especially in the typical ABAC/XACML scenario where the PDP is a remote service from the PEP's perspective and is possibly talking to many PEPs in completely different application environments.
As mentioned previously, for the workflow, look at section 3.1 (Data-flow model) in the XACML core spec. For the syntax, the XACML core specification defines a syntax for policies, authorization decision requests and responses, and nothing else at this point. You may find other things in the XACML Profiles, but no such thing as a configuration syntax, to my knowledge.
In AuthzForce, the PDP engine is made aware of Attribute Providers by the PDP configuration, i.e. the pdp.xml file in your example. You'll need two other files (XML catalog and schema) depending on the Attribute Provider you want to use. This is documented in the Using Attribute Providers section of AuthzForce Core's wiki.
Your code looks like test code to me: since you are getting the XACML request from a local file, you have full control over it, so there is no need to check further. More generally, it depends on the actual use case; there is no universal rule. Some attributes (like a subject-id resulting from authentication) are specific to and only known by the PEP in its own application environment, so they are the responsibility of the PEP. Other attributes are more likely the responsibility of the PDP (through Attribute Providers) if they can be resolved in a central way, such as attributes in a company's directory or another kind of identity repository.
In addition to @cdan's excellent response, here are a few more pointers:
Is checking Requests against Attribute Providers the task of the PDP or of another entity?
The PDP always trusts the information (attributes) it receives, whether it be from the PEP or from the PIPs. As such, the PDP need not verify values it received from a PEP by checking with a PIP. That would be counter-productive and inefficient. If you cannot trust the PEP to send the right value, how can you trust it to enforce the right decision?
Did OASIS specify anything concrete about that issue? E.g. workflow or syntax of configuration files
No, we did not. PIP behavior is outside the scope of the XACML spec.
Is there a way to make my pdp engine aware of Attribute Providers?
Should I just check the provided request on my own before Response response = pdp.evaluate((Request) request);?
The PDP should be configured with PIPs. The PDP will use all the PIPs it can.

REST API Design - handling resource subtypes

In the API I'm putting together, I perform an Action over a set of entities. The issue I'm having is how to allow the action to vary depending on the client's preferences and supported methods.
Some of the action types will need the client to provide additional options: for instance, one of the actions will result in an email being sent, so the client needs to provide the body, recipients and so on. The server may know about more action types than the client does. For instance, a new sending method could be added, but an old client isn't going to know how to set up the options for it. There are also action types that require no options from the client, and hence all clients can use them as soon as the server supports them.
As well as varying the action types based on those supported by the client/server, the entities selected also have an effect -- some entities are not valid for some action types.
At the end of this negotiation, the client (ultimately the end user) is free to choose from any of the 'no-option' action types applicable to these entities, or any of the 'need-options' action types applicable to these entities, and supported by both the client and the server.
Actions are triggered by setting a status field to committed or similar.
My thoughts so far are to provide a generic DoAction resource, with a sub-collection of the entities. A property on this resource lets you specify the 'ActionType'. There is then another sub-resource called ActionOptions, and it's here that you either set the options for the particular type you're using, or leave it empty for 'no-option' types.
The issue I'm having is deciding whether this is the best approach, or whether something involving content types would be better, and also how to negotiate the list of available action types for the client, including the no-option types, which the client can use even if it doesn't explicitly know about them.
I decided to add two read-only collections to the DoAction resource, one listing the no-option action types, and one listing the need-options types (plus could optionally include schema-like info there). These collections are based on the entities included.
The client sets the action type and the options, which form a dynamic key/value store. When the status is changed to committed, that's an opportunity to validate the resource prior to performing the action.
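To make the shape concrete, here is a rough sketch of that resource as Go types (every field name here is an illustrative assumption, not a fixed contract):

package api

// DoAction models the generic action resource described above.
type DoAction struct {
	ActionType string   `json:"actionType"`
	Status     string   `json:"status"` // set to "committed" to trigger the action
	Entities   []string `json:"entities"`

	// Dynamic key/value options for 'need-options' action types;
	// left empty for 'no-option' types.
	ActionOptions map[string]string `json:"actionOptions,omitempty"`

	// Read-only, computed by the server from the selected entities.
	AvailableNoOptionTypes   []string `json:"availableNoOptionTypes,omitempty"`
	AvailableNeedOptionTypes []string `json:"availableNeedOptionTypes,omitempty"`
}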

OData with WCF Data Services / Entity Framework

Apologies in advance, this is a long question.
(TL;DR: Does anyone have any advice on using EF with dynamic fields exposed using WCF Data Services/OData?)
I am having some conceptual problems with WCF Data Services and EF, specifically pertaining to exposing some data as an OData service.
Basically my issue is this. The database I am exposing allows users to add fields dynamically (user-defined fields), and it uses a system whereby these fields are added directly to the underlying SQL tables. Furthermore, when you want to add data to the tables you cannot use direct SQL; you have to go via an API that they provide (it's SAP Business One, fwiw).
I have already successfully built a system that exposes various objects via XML and allows a client to update or add new entities into SBO by sending in XML messages, and although it works well it's not really suited to mobile apps, as it's very XML-heavy and the entry point is an old-skool asmx web service. I want to try to jazz it up for mobile development and use OData with WCF or Web API. (I know I could change up to a WCF service, allow handling of JSON-format requests, and start returning JSON data, but it just seems like there must be a more...native...way.)
Initially I had discounted the possibility of using EF for this because a) the dynamic fields, and b) EF could only be read-only; adding/updating entities would have to be intercepted and routed to the SBO DI Server. However, I am coming back to thinking about it and am looking for some advice (negative or otherwise!) on how to approach it.
What I basically want to do is this
Expose the base tables from SBO (which don't change except when SAP themselves issue a patch) as EF entities, with all the usual relational goodness. In fact, I will not actually be exposing the tables directly; I will use a set of filtered SQL views as the data sources, as this ties in with various other things we do to expose only certain data to 3rd parties.
Expose any UDFs a particular user has added as some kind of EAV sub-collection per entity.
Intercept any requests to ADD or UPDATE an object, and route these through an existing engine I have for interfacing with the SAP Data import services.
I suppose my main question is this: suppose I implement an EF entity representing a Sales Order, which comprises a Header and a Details collection. Into each of these classes I stick an EAV-type collection of user-defined fields and values. How much work is involved in allowing the OData filtering system to work directly on the EAV collection (e.g. for a client to be able to ask for Service/Orders?$filter=SomeUdfField eq SomeValue, where this request has to be passed down into the EAV collection of the Order header entity)?
Or is it possible, for example, to generate an EF model from some kind of metadata on the fly (I don't mind how - code generation or a model-building library), which would mean I could just expose each entity, dynamic fields included, as a proper EF model? Many thanks in advance if you read this far :)
For basic CRUD against an existing EF context, WCF Data Services works out great. As soon as you want to add some custom functionality, as you described above, it takes a bit more work.
What you described is possible, but you would need to build out your own custom data provider to handle the dynamic generation of metadata as well as custom hooks into add/update/delete.
It may be worth looking into the WCF Data Services Toolkit; it's a custom provider which slaps a repository pattern over WCF Data Services for ease of use, but it does not provide custom metadata generation.

OData / WCF Data Services metadata versioning

Is there any metadata versioning support in OData protocol and its WCF Data Services implementation?
Let us suppose that we have an OData service that exposes a single Goods collection, and the Goods entity type has three properties: Key (string), Name (string) and AvailableSince (string). The service is already running, and there are some consumers that rely on this metadata schema.
Next, we want to update the Goods entity type - for example, replace the AvailableSince (string) property with something else, or change its type from string to datetime - so we will have two versions of the metadata, and consumers that depend on the first version will not be able to send correct requests in terms of the second metadata schema.
Is there any way to provide both metadata versions within a single service? If so, how can a consumer specify the metadata version in a request, and how should it be processed on the WCF side?
Thanks to all in advance.
Short answer: NO.
Most metadata changes require either a new service or breaking existing clients.
If the existing set of clients is important, our general recommendation is to create a new service...
i.e. something like:
/v1/myservice.svc
&
/v2/myservice.svc
Alex
OData Program Manager
This article describes which metadata changes require a new service version and which changes do not require a service update.
http://msdn.microsoft.com/en-us/library/ee473427.aspx