My approach is to develop a multi-tenant application using the shared database, separate schemas pattern, ideally on PostgreSQL: one database for all tenants, but one schema per tenant, similar to django-tenant-schemas. Is there a guide or add-on to achieve this on websauna?
Here is what there is and what there isn't:
Already there
SQLAlchemy supports PostgreSQL schema-based tenancy. You can set this up by overriding the database session factory when subclassing websauna.system.Initializer for your application. See configure_database, which leads you down the path that allows you to override create_dbsession. Your database session factory would look at the properties of the HTTP request (domain) and configure the session to point to the corresponding schema accordingly.
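As a minimal sketch of the SQLAlchemy side of this (the hook name and the request-to-schema helper are assumptions, not exact websauna API; only the search_path mechanics are standard PostgreSQL/SQLAlchemy):

# Sketch: pick the tenant schema per HTTP request by adjusting search_path.
from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker

engine = create_engine("postgresql://localhost/myapp")
Session = sessionmaker(bind=engine)

def resolve_tenant_schema(request):
    # Hypothetical helper: map the request host to a schema name,
    # e.g. acme.example.com -> tenant_acme
    subdomain = request.host.split(".")[0]
    return "tenant_" + subdomain

def create_tenant_dbsession(request):
    """Roughly what an overridden create_dbsession could do per request."""
    dbsession = Session()
    schema = resolve_tenant_schema(request)
    # set_config() accepts bind parameters, unlike a plain SET statement
    dbsession.execute(
        text("SELECT set_config('search_path', :path, false)"),
        {"path": schema + ", public"},
    )
    return dbsession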
Add-on architecture that could provide a pluggable websauna.tenant addon
Theming: CSS and other assets (logo) can be customized by providing a site/base.html template that is tenant aware, e.g. one that injects css.html in <head> with the ability to define CSS filenames from the database. You would override the default site/base.html in the websauna.tenant add-on.
Missing
Alembic migrations support PostgreSQL schemas, but I am not sure how complete this support is
There is no Tenant model, e.g. to track the subscriber's billing status
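For reference, a hypothetical sketch of what such a Tenant model might look like (the column names and BillingStatus values are made up; this is not part of websauna):

# Hypothetical Tenant model living in the shared "public" schema.
import enum
import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class BillingStatus(enum.Enum):
    trial = "trial"
    active = "active"
    suspended = "suspended"

class Tenant(Base):
    __tablename__ = "tenant"
    __table_args__ = {"schema": "public"}  # kept in the shared schema, visible to all tenants

    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String(256), nullable=False)
    schema_name = sa.Column(sa.String(63), unique=True, nullable=False)   # e.g. "tenant_acme"
    domain = sa.Column(sa.String(256), unique=True, nullable=False)       # resolved from the HTTP request
    billing_status = sa.Column(sa.Enum(BillingStatus), nullable=False,
                               default=BillingStatus.trial)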
Is it possible to bootstrap UserGroups and Policies with a file based provider?
Currently we use org.apache.nifi.authorization.FileUserGroupProvider to bootstrap an Initial User Identity and org.apache.nifi.authorization.FileAccessPolicyProvider to bootstrap the Initial Admin Identity when setting up a NiFi instance.
I inspected the code of the FileUserGroupProvider as well as the Authorizers.xml setup in the Administration Guide and I couldn't find anything about bootstrapping UserGroups. I guess the same goes for bootstrapping AccessPolicies using the FileAccessPolicyProvider. I know that it is possible using LDAP, but we don't use that right now.
I already found a similar question here on Stack Overflow, but the solution is not satisfactory, as we don't want to use the nifi-api for that task unless absolutely necessary. So what I would do is write a new file-based UserGroupProvider and AccessPolicyProvider to fulfill that task.
Is that the only possibility?
Would I use the CompositeUserGroupProvider or the CompositeConfigurableUserGroupProvider for that? In other words, instead of re-implementing the functionality of the FileUserGroupProvider and adding my custom implementation, could I use one of these to combine the functionality?
Meaning something like this:
<userGroupProvider>
    <identifier>composite-user-group-provider</identifier>
    <class>org.apache.nifi.authorization.CompositeUserGroupProvider</class>
    <property name="User Group Provider 1">org.apache.nifi.authorization.FileUserGroupProvider</property>
    <property name="User Group Provider 2">MyFileUserGroupProvider</property>
</userGroupProvider>
What would the configuration look like in the authorizers.xml file?
If my assumption about how to use a CompositeProvider is correct, is there something similar for bootstrapping Policies?
If I understand correctly, you want to automate setting users, groups, and policies to fixed, predefined values.
I would recommend using the FileUserGroupProvider and the FileAccessPolicyProvider, as those both give you the ability to configure users, groups, and policies directly in NiFi itself. You should not have to create custom implementations of a UserGroupProvider or AccessPolicyProvider unless you need to customize the functionality beyond what the included file-based providers can supply.
You said you did not want to use the nifi-api, by which I assume you mean the HTTP REST API. (I am not trying to be pedantic; there is actually a library called nifi-api that is a collection of Java interfaces for NiFi developers to use in writing extensions.) The REST API is a good option I would normally recommend, as there are guarantees on backwards compatibility for NiFi 1.x going forward, but it is not the only way to achieve what you want to do.
You can create users.xml and authorizations.xml files manually (or scripted), outside of NiFi, and you just have to configure the FileUserGroupProvider and FileAccessPolicyProvider to use those files (or copy them to the default location for those files in the conf directory). On startup, NiFi reads the contents of these files into memory to create users, groups, and access policies. The Initial User and Initial Admin properties are only used to automate populating these files when they are absent or empty, so if you provide your own copies of these files, they will be used.
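For illustration, an authorizers.xml fragment along these lines would wire the file-based providers to your pre-created files (the identifiers and paths are examples; double-check the property names against the Admin Guide for your NiFi version):

<authorizers>
    <userGroupProvider>
        <identifier>file-user-group-provider</identifier>
        <class>org.apache.nifi.authorization.FileUserGroupProvider</class>
        <property name="Users File">./conf/users.xml</property>
        <!-- Initial User Identity properties can be omitted here; they only
             apply when users.xml is absent or empty -->
    </userGroupProvider>
    <accessPolicyProvider>
        <identifier>file-access-policy-provider</identifier>
        <class>org.apache.nifi.authorization.FileAccessPolicyProvider</class>
        <property name="User Group Provider">file-user-group-provider</property>
        <property name="Authorizations File">./conf/authorizations.xml</property>
        <!-- Likewise, Initial Admin Identity is only used to seed an empty file -->
    </accessPolicyProvider>
    <authorizer>
        <identifier>managed-authorizer</identifier>
        <class>org.apache.nifi.authorization.StandardManagedAuthorizer</class>
        <property name="Access Policy Provider">file-access-policy-provider</property>
    </authorizer>
</authorizers>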
The structure of these XML files is fairly simple to create. You can use a NiFi instance to create users, groups, and policies through the UI, and see what is written to these files. You can then create them however you like: the NiFi UI, by hand, or scripted from another source file. Once you have the files created, you can do the "bootstrapping" part by placing them in the NiFi conf dir and (re)starting it. NiFi does not regenerate or modify these files unless users, groups, and policies are modified in the UI.
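To give a rough idea of the shape of those files (based on what a NiFi 1.x instance typically writes; the identifiers here are shortened placeholders, so verify against files generated by your own instance):

users.xml:

<tenants>
    <groups>
        <group identifier="group-1" name="operators">
            <user identifier="user-1"/>
        </group>
    </groups>
    <users>
        <user identifier="user-1" identity="CN=admin, OU=NiFi"/>
    </users>
</tenants>

authorizations.xml:

<authorizations>
    <policies>
        <!-- action is "R" (read) or "W" (write) for the given resource -->
        <policy identifier="policy-1" resource="/flow" action="R">
            <user identifier="user-1"/>
            <group identifier="group-1"/>
        </policy>
    </policies>
</authorizations>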
The only downside with this approach is that these files are not guaranteed to have a stable schema going forward, so new fields could be added or changed over time. That said, they have been stable for the last several versions of NiFi.
We have an Identity Provider User registry, and a SOAP Web Service for applications to read/write user profiles. Now we plan to add a SCIM interface as well.
We find that the Core User schema covers the basic set of attributes; however, our existing system has a different naming convention for the same attributes.
For example, say USERTELEPHONENUMBER, USERSTREETADDR1 and so on.
Considering large number of applications already using this naming convention, we would like to continue the same with SCIM 2.0.
Given that we can extend the Core User schema,
1) Can we opt not to use any attributes from the Core schema? If the payload includes these attributes, can we simply ignore them on the server side and process only the custom schema attributes?
An example User document -
{
  "schemas": ["urn:scim:schemas:core:2.0:User",
              "urn:scim:schemas:extension:customattrs:2.0:User"],
  "id": "2819c223-7f76-453a-919d-413861904646",
  "urn:scim:schemas:extension:customattrs:2.0:User": {
    "USERFIRSTNAME": "fname",
    "USERLASTNAME": "lname",
    "USERTELEPHONENUMBER": "1231231234"
  }
}
2) Or we can define a new resource type altogether and define a new core schema for it.
Which of these options would be the cleaner way?
If you don't plan to use the core schemas, why use SCIM at all?
SCIM strongly discourages having multiple attributes that mean the same thing.
I would suggest that you create a mapping between your attributes and the SCIM core (and enterprise extension) attributes. If there is anything that does not map to those two schemas, you should create an extension.
I think what you need is a mapping between the SCIM core schema attributes and your existing system attributes. As you have said, both share the same meaning, so you should not redefine those attributes in the extension. That is strongly discouraged by the SCIM specification (https://www.rfc-editor.org/rfc/rfc7643#section-3.3):
Schema extensions SHOULD avoid redefining any attributes defined in this specification and SHOULD follow conventions defined in this specification.
However, if you have additional attributes in your existing system, you may define them in the extension.
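For example, a user document that maps the existing names onto core attributes and keeps only a genuinely custom field in an extension could look roughly like this (the extension URN and the USEREMPLOYEECATEGORY attribute are placeholders; note that RFC 7643 uses the urn:ietf:params:scim:... form for the 2.0 schema URNs):

{
  "schemas": [
    "urn:ietf:params:scim:schemas:core:2.0:User",
    "urn:example:params:scim:schemas:extension:legacy:2.0:User"
  ],
  "id": "2819c223-7f76-453a-919d-413861904646",
  "userName": "fname.lname",
  "name": {
    "givenName": "fname",
    "familyName": "lname"
  },
  "phoneNumbers": [
    { "value": "1231231234", "type": "work" }
  ],
  "urn:example:params:scim:schemas:extension:legacy:2.0:User": {
    "USEREMPLOYEECATEGORY": "contractor"
  }
}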
If you have a decoupled SCIM implementation like WSO2 Charon (https://docs.wso2.com/display/IS450/Implementing+SCIM+with+Charon), I suggest having a separate layer underneath the SCIM implementation layer to do the necessary mapping of the attributes before they are used in any business logic. But that basically depends on your implementation.
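As a very small sketch of what such a mapping layer could do (plain Python for illustration; this is not Charon API, and the legacy attribute names are simply the ones from the question):

# Illustrative mapping layer between legacy attribute names and SCIM 2.0 core
# attributes. Nothing here is tied to a particular SCIM library.

def legacy_to_scim(legacy: dict) -> dict:
    """Map a flat legacy profile onto SCIM 2.0 core attributes."""
    scim = {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "name": {
            "givenName": legacy.get("USERFIRSTNAME"),
            "familyName": legacy.get("USERLASTNAME"),
        },
    }
    if legacy.get("USERTELEPHONENUMBER"):
        scim["phoneNumbers"] = [
            {"value": legacy["USERTELEPHONENUMBER"], "type": "work"}
        ]
    return scim

def scim_to_legacy(scim: dict) -> dict:
    """Inverse mapping for writes arriving over the SCIM interface."""
    name = scim.get("name", {})
    phones = scim.get("phoneNumbers", [])
    return {
        "USERFIRSTNAME": name.get("givenName"),
        "USERLASTNAME": name.get("familyName"),
        "USERTELEPHONENUMBER": phones[0]["value"] if phones else None,
    }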
Mean.io comes with a built-in user model within the user package. What is the best practice for extending that user model if I want to attach additional data to it?
My experience with Django had me creating a "profile" that had a foreign key pointing towards the user object it belonged to. I like this approach because I don't touch the user package that way. But is this a best practice? If this is, how can I ensure the creation of a profile doc at the creation of a user doc? If not, what is?
I'm not sure qm69's solution would be the best for future compatibility with mean.io. The mean.io documentation (http://learn.mean.io/) states that the developer shouldn't alter any core packages, including the user package.
The mean.io pattern is to implement any and all extensions as a custom package, and to override default views using the $viewPathProvider.override method.
Secondly, the User package is fundamentally a security/authentication feature, not a profile implementation, and it regularly receives updates. Altering it will most likely break future fixes and risks introducing security bugs.
My advice would be to implement a profile using mean's package system and add a service dependency on the User service. I've done this in previous projects and it works well.
To implement a profile package, follow the steps below:
1) Create a custom package called profile using mean package profile.
2) Implement the model/view/controller for all profile requirements in the custom package. DON'T ALTER ANYTHING IN THE USER package.
3) Use dependency injection to include the Global service. This will give you access to Global.user data, so you most likely don't even need to use the User services.
4) Override any User views using the $viewPathProvider.override method mentioned in the documentation above.
Hope this helps ;)
I came across many different models for access control in a system. When implementing an access control model, we usually hard-code the rules/rights in the database (considering an RDBMS) by creating separate tables for access control. Also, these rules/rights can be stored in an XML database.
I would like to know: what is the difference between storing the rules in an RDBMS and in an XML database? Also, when should we use XACML for implementing an access control model in a system? I mean, how can one decide whether to hard-code the rules/rights in the database or to use the XACML policy language?
Thanks.
Disclaimer: I work for Axiomatics, a vendor implementation of XACML
If you go your own way, the authorization logic could be stored either in the RDBMS or in an XML database. It doesn't matter, and I doubt that XML brings you any added capabilities.
Now, if you want an authorization system that can cater for RDBMS systems and other types of applications too (CRM, .NET, Java...) then you want to use a solution that is agnostic of the type of application it protects. That's the goal of XACML, the eXtensible Access Control Markup Language.
XACML provides attribute-based, policy-based access control (ABAC and PBAC). This gives you the ability to write extremely expressive authorization policies and manage them centrally in a single repository. A central authorization engine (called the Policy Decision Point or PDP) will then serve decisions to your different applications.
As Bell points out, the minimum set of attributes you will need is typically attributes about the user (Subject), the resource, and the action. XACML also lets you add environment attributes. This means you can write the following type of policy:
Doctors can view the medical records of patients they are assigned to.
Doctors describes the user / subject
view describes the action
medical records describes the targeted resource
of patients describes the targeted resource too. It's metadata about the resource
they are assigned to is an interesting case. It's an attribute that defines the relationship between the doctor and the patient. In ABAC, this gets implemented as doctor.id==patient.assignedDoctorId. This is one of the key benefits of using XACML.
Benefits of XACML include:
- the ability to externalize the authorization logic as mentioned by Bell
- the ability to update authorization logic without going through a development/deployment lifecycle
- the ability to have fine-grained authorization implemented the same way for many different applications
- the ability to have visibility and audits on the authorization logic
HTH
The two are not mutually exclusive.
An XACML policy describes how to translate a set of attributes about an attempted action into a permitted/denied decision. At minimum the attributes would be who the user is (Subject), what they are trying to do (Action) and what they are trying to do it to (Object). Information such as time, the source of the request and many others can be added.
The attributes of the user and the object will still have to be stored in the database. If you are grouping users or objects to simplify administration or to simplify defining access control rules, then you're going to have to manage all of that in the database too. All that data will then need to be passed to the XACML Policy Decision Point to return the permit/deny decision.
The advantage of using XACML to define these rules, rather than writing your own decision logic for the rules defined in the database, is that the assessment of the rules can be handed off to an external application. Using a mature, tested XACML implementation (there are open source options) keeps you from making mistakes in building the checks into your own code.
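As a rough illustration of that hand-off (the endpoint URL and attribute IDs below are placeholders, and the request shape follows the XACML JSON Profile; a real deployment would use the PDP vendor's documented endpoint and attribute identifiers):

# Sketch: ask an external PDP for a permit/deny decision over REST,
# using the XACML JSON Profile request format.
import requests

PDP_URL = "https://pdp.example.com/authorize"  # placeholder PDP endpoint

def is_permitted(user_id: str, action: str, record_id: str) -> bool:
    xacml_request = {
        "Request": {
            "AccessSubject": {"Attribute": [{"AttributeId": "user.id", "Value": user_id}]},
            "Action": {"Attribute": [{"AttributeId": "action.id", "Value": action}]},
            "Resource": {"Attribute": [{"AttributeId": "record.id", "Value": record_id}]},
        }
    }
    response = requests.post(
        PDP_URL,
        json=xacml_request,
        headers={"Content-Type": "application/xacml+json"},
        timeout=5,
    )
    response.raise_for_status()
    # The PDP answers Permit, Deny, NotApplicable or Indeterminate;
    # only an explicit Permit grants access.
    return response.json()["Response"][0]["Decision"] == "Permit"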
Hardcoding the policies in your code is a very bad practice, I think. In that case you mix the business logic of your resources with the permission checks of the access control system. XACML is a big step in the right direction because you can create a fully automated access control system if you store your rules in a separate place (not hardcoded in the business logic).
By the way, you can store those rules in the database too. For example (in a fictive programming language):
hardcoded RBAC:
#xml
role 1 editor

#/articles
ArticleController
    #GET /
    readAll() {
        if (session.notLoggedIn())
            throw 403;
        if (session.hasRole("editor"))
            return articleModel.readAll();
        else
            return articleModel.readAllByUserId(session.getUserId());
    }
not hardcoded ABAC:
#db
role 1 editor
policy 1 read every article
    constraints
        endpoint GET /articles
    permissions
        resource
            projections full, owner
role 2 regular user
policy 2 read own articles
    constraints
        endpoint GET /articles
        logged in
    permissions
        resource
            projections owner

#/articles
ArticleController
    #GET /
    readAll() {
        if (session.hasProjection(full))
            return articleModel.readAll();
        else if (session.hasProjection(owner))
            return articleModel.readAllByUserId(session.getUserId());
    }
As you can see, the non-hardcoded code is much clearer than the hardcoded version because of the code separation.
XACML is a standard (which covers about ten times more than the example above), so you don't have to learn a new access control system for every project, and you don't have to implement XACML in every language, because others have already done it, if you are lucky...
I am working on a content platform that should provide semantic features such as querying with SPARQL and providing rdf documents for the contained content.
I would be very thankful for some clarification on the following questions:
Did I get that right, that an entity hub can connect several semantic stores to a single point of access? And if not, what is the difference between a semantic store and an entity hub?
What frameworks would you use to store content documents as well as their semantic annotation?
It is important for the solution to be able to later on retrieve the document (html page / docs such as pdf, doc,...) and their annotated version.
Thanks in advance,
Chris
The only Entityhub term that I know of belongs to the Apache Stanbol project. Here is a paragraph from the original documentation explaining what the Entityhub does:
The Entityhub provides two main services. The Entityhub provides the connection to external linked open data sites as well as using indexes of them locally. Its services allow to manage a network of sites to consume entity information and to manage entities locally.
Entityhub documentation:
http://incubator.apache.org/stanbol/docs/trunk/entityhub.html
The Enhancer component of Apache Stanbol extracts external entities related to the submitted content, using the linked open data sites managed by the Entityhub. These enhancements of the content are produced as RDF data. It is then also possible to store those content items in Apache Stanbol and run SPARQL queries on top of the RDF enhancements. The Contenthub component of Apache Stanbol also provides faceted search functionality over the submitted content items.
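As a small sketch of how a client could use this (the base URL and port match the default Stanbol launcher, but treat them, and the exact endpoint path, as assumptions to verify against the documentation below):

# Sketch: POST plain text to a running Stanbol instance and get the
# enhancements back as RDF (Turtle).
import requests

STANBOL = "http://localhost:8080"  # default launcher address, adjust as needed

def enhance(text: str) -> str:
    response = requests.post(
        STANBOL + "/enhancer",
        data=text.encode("utf-8"),
        headers={"Content-Type": "text/plain", "Accept": "text/turtle"},
        timeout=30,
    )
    response.raise_for_status()
    return response.text  # RDF describing the extracted entities

print(enhance("Paris is the capital of France."))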
Documentation of Apache Stanbol:
http://incubator.apache.org/stanbol/docs/trunk/
Access to running demos:
http://dev.iks-project.eu/
You can also ask your further questions to stanbol-dev AT incubator.apache.org.
Alternative suggestion...
Drupal 7 has built-in RDFa support for annotation and is more of a general-purpose CMS than Semantic MediaWiki.
In more detail...
I'm not really sure what you mean by entity hub; where are you getting that definition from, or what do you mean by it?
Yes, one can easily write a system that connects to multiple semantic stores; given the context of your question, I assume you are referring to RDF triple stores?
Any decent CMS should be assigning some form of unique/persistent ID to documents, so even if the system you go with does not support semantic annotation natively, you could build your own extension for this. The extension would simply store annotations against the document's ID in whatever storage layer you chose (I'd assume a triple store would be appropriate), and then you can build appropriate query and presentation layers for querying and viewing this data as required.
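A minimal sketch of that idea, using rdflib purely for illustration (the namespaces and predicate names are made up, and a production setup would point rdflib at a persistent triple store rather than an in-memory graph):

# Store annotations against a CMS document ID as RDF triples and query them.
from rdflib import Graph, Literal, Namespace

DOC = Namespace("http://example.com/document/")      # one URI per CMS document ID
ANNOT = Namespace("http://example.com/annotation#")  # illustrative predicates

graph = Graph()  # in-memory; swap in a persistent store for real use

def annotate(doc_id: str, predicate: str, value: str) -> None:
    """Attach an annotation to the document identified by its CMS ID."""
    graph.add((DOC[doc_id], ANNOT[predicate], Literal(value)))

annotate("1234", "topic", "Semantic Web")
annotate("1234", "reviewedBy", "chris")

# Retrieve everything annotated on one document via SPARQL
for predicate, value in graph.query(
    "SELECT ?p ?o WHERE { ?doc ?p ?o }",
    initBindings={"doc": DOC["1234"]},
):
    print(predicate, value)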
http://semantic-mediawiki.org/wiki/Semantic_MediaWiki
Apache Stanbol
Do you want to implement a traditional CMS extended with some semantic capabilities, or do you want to build a Semantic CMS? It could look the same, but these are actually two completely opposite approaches.
It is important for the solution to be able to later on retrieve the document (html page / docs such as pdf, doc,...) and their annotated version.
You can integrate Apache Stanbol with a JCR/CMIS compliant CMS like Alfresco. To get custom annotations, I suggest creating your own custom enhancement engine (maven archetype) based on your domain and adding it to the enhancement engine chain.
https://stanbol.apache.org/docs/trunk/components/enhancer/
Once this is done, you can use the REST API endpoints provided by Stanbol to retrieve the results in RDF/Turtle format.