Is it possible to synchronize the LDAP data in Gluu with an RDBMS?

I've been reading a bit about Gluu. It uses an LDAP or a Couchbase backend. I need some of the user data (at least the immutable user identifier) to be replicated to an RDBMS (let's say PostgreSQL).
Is it possible? Also, am I out to lunch trying to achieve this? CAS and Keycloak offer the option to hook the product to an RDBMS, so I would say no - but it might be an anti-pattern.
Thanks

It's possible to sync LDAP with SQL (and other stores) using LSC (LDAP Synchronization Connector).
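LSC itself is configured declaratively (XML task definitions) rather than coded, but conceptually a sync task boils down to something like the following sketch. Hypothetical hosts, DNs, credentials and table names throughout; it assumes the ldap3 and psycopg2 Python packages, a Gluu-style directory where inum is the immutable identifier, and a UNIQUE constraint on users.inum in PostgreSQL:

```python
# Conceptual sketch of what an LSC-style LDAP -> RDBMS sync task does.
# Hypothetical hosts, DNs, credentials and table names throughout.
from ldap3 import Connection, Server
import psycopg2

ldap = Connection(Server("ldap.example.org"),
                  user="cn=directory manager,o=gluu",
                  password="secret", auto_bind=True)
ldap.search("ou=people,o=gluu", "(objectClass=gluuPerson)",
            attributes=["inum", "uid", "mail"])

pg = psycopg2.connect("dbname=app user=app password=secret")
with pg, pg.cursor() as cur:          # commits on clean exit
    for entry in ldap.entries:
        # Upsert keyed on the immutable identifier (inum).
        cur.execute(
            """
            INSERT INTO users (inum, uid, mail)
            VALUES (%s, %s, %s)
            ON CONFLICT (inum) DO UPDATE
                SET uid = EXCLUDED.uid, mail = EXCLUDED.mail
            """,
            (str(entry.inum), str(entry.uid), str(entry.mail)))
pg.close()
```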
You can also post your question in https://support.gluu.org to talk directly to the team for further details.

Related

Implementing LDAP (preferably OpenLDAP) 'tiered' server

I'm trying to set up an LDAP server which has a local database of some users, but will refer the request to another 'parent' server if the record is not found in its own database. The records would not have a different suffix; only the user IDs would differ. I have read lots about referrals, chaining, master/slave, producer/consumer, etc. However, I cannot work out which is the right solution for me, as there is a lot of conflicting, confusing documentation. I am only a user of the 'parent' server, so I would prefer not to have to change any settings on it. I would like to use OpenLDAP or an equivalent FOSS solution.
Thanks!

Azure multi tenant security - Azure Federated SQL, certs etc

I'm trying to get some advice on how to approach a security architecture on Azure.
Background:
We are looking at building a multi-tenant app on Azure that needs to be extremely secure (personally sensitive data). The app will be accessed by standard browsers and mobile devices.
Security access types:
We have three types of users / access types...
1 - plain old user/password over HTTPS is fine, accessing both general, non-private SQL data plus hosted files
2 - user/pass over HTTPS, but we need authentication of users via certificates that will be installed on user machines/devices. This level of user will need access to sensitive data, which should be encrypted at rest both in the database and in any uploaded files.
3 - same as (2) but with the addition of some two-factor authentication (we have used YubiKey for other things - might look towards a phone OTP offering as well)
Most users will only have access to their own tenant databases, however we have "account manager" type users that need access to selected tenant data, therefore we expect that they will need either a copy of one certificate per tenant they serve, or we will have to use some kind of master certificate.
Database type:
From a multi-tenant point of view it seems Azure Federated SQL is a good way to go because (a) we simply write one app with a "TenantID" key in each table and, after login, set a global filter that handles the isolation for us, and (b) we understand that Azure Federated SQL actually maintains separate SQL database instances per tenant in the background. (Ref: http://msmvps.com/blogs/nunogodinho/archive/2012/08/11/tips-amp-tricks-to-build-multi-tenant-databases-with-sql-databases.aspx)
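To make the "global filter" idea concrete, it amounts to scoping every query by the tenant key. A minimal sketch, with hypothetical DSN, table and column names, assuming the pyodbc package:

```python
# Minimal sketch of tenant scoping via a TenantID column.
# Hypothetical DSN, table and column names; assumes the pyodbc package.
import pyodbc

def fetch_invoices(conn, tenant_id):
    # Every query must pass through this tenant filter; forgetting the
    # WHERE clause anywhere is exactly the risk the answer below raises.
    cur = conn.cursor()
    cur.execute(
        "SELECT InvoiceId, Amount FROM Invoices WHERE TenantID = ?",
        tenant_id)
    return cur.fetchall()

conn = pyodbc.connect("DSN=AzureSql;UID=app;PWD=secret")
rows = fetch_invoices(conn, tenant_id=42)
```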
Can anyone point to any links or give advice in relation to the approach needed to set up and manage file shares, encryption of SQL and file data at rest, authentication of users, etc.? (Automated management on new user signup preferred.)
I can't really help on the certificates, but you will indeed need some "master certificate". If you are planning on using Azure Websites, you can't currently use your own certificates.
Concerning the database setup: SaaS applications are built on trust, so you NEVER (EVER) want one user's data to be shown to, or edited by, other users.
Therefore I strongly suggest that you don't use a TenantID column in each table. That would still leave the possibility of an attack by a malicious user or an error by some developer.
The only ways to get around these risks are:
extensive testing
physically separate tables to store each tenant's data.
Personally I believe that even with very extensive, automated testing you can't get 100% coverage against malicious users. I guess I am not alone.
The only way out, IMHO, is physically separate tables. Let's look at the options:
different server: valid, but pretty expensive in Azure
different database: valid, less management overhead, but the same objection as the previous option - expensive if you have a lot of tenants
different schemas: the solution (see the provisioning sketch after this list). Think about it...
you only have to manage users and their default schemas
you can back up schemas using PowerShell
you can move schemas to other databases with some work
you can still dig into SQL Federation if you need to
the major drawback is that you will need to support database upgrades for each tenant.
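Here is a minimal sketch of what provisioning one tenant under the schema-per-tenant option might look like: its own schema plus a database user whose default schema isolates it. Hypothetical names; it assumes the pyodbc package and SQL Server / Azure SQL T-SQL, and is the shape of the idea rather than a hardened implementation:

```python
# Sketch of provisioning one tenant as its own schema plus a database
# user whose default schema isolates it. Hypothetical names throughout.
import pyodbc

def provision_tenant(conn, tenant):
    # Schema names cannot be bound as query parameters, so validate
    # the name strictly before interpolating it.
    if not tenant.isidentifier():
        raise ValueError("unsafe tenant name")
    cur = conn.cursor()
    cur.execute(f"CREATE SCHEMA [{tenant}]")
    cur.execute(f"CREATE USER [{tenant}_app] WITHOUT LOGIN "
                f"WITH DEFAULT_SCHEMA = [{tenant}]")
    # The tenant's user gets rights on its own schema only.
    cur.execute(f"GRANT SELECT, INSERT, UPDATE, DELETE "
                f"ON SCHEMA::[{tenant}] TO [{tenant}_app]")
    conn.commit()
```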
Have you read any articles about multi-tenancy on azure.com? http://msdn.microsoft.com/en-us/library/windowsazure/hh689716.aspx

MongoDB autosharding vs. authentication

Long time lurker, first time poster, please bear with me.
I'm trying to set up a sharded, secure MongoDB environment. I would like to make use of Mongo's autosharding capability, since I'm sort of new to databases and on a tight schedule.
It seems that autosharding only applies to individual collections (tables), but I don't want users to have access to the entire collection. Further, MongoDB only allows authentication at the database level, so once authenticated, a user can see 1) every collection in the db and 2) all data within each collection. So, as far as I can tell, I can either have autosharding and no authentication, or manual sharding and authentication.
I would like the best of both worlds, that is: autosharding and authentication. Is this possible? If not, how should I go about manual sharding in MongoDB?
A simplified use case of this system: collection 'Users' has data on every user. I want to authenticate user X so that X can only see X's data in the Users collection. And Users is distributed across multiple servers, partitioned (sharded) by user_name.
MongoDB doesn't have authentication like traditional SQL databases. In fact, if you read the manual, it's recommended that you use a secured environment instead of relying on authentication. Any access control over your data would be implemented within your application.
Even with traditional SQL, access isn't controlled by row. That's usually something implemented at the application level based on some sort of key within the data.
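A minimal sketch of that application-level pattern with pymongo. Hypothetical names; it assumes the cluster was already sharded on user_name, e.g. in the mongo shell with sh.shardCollection("app.Users", {"user_name": 1}):

```python
# Sketch of application-level access control on a sharded collection:
# every read is constrained to the authenticated user's own documents.
from pymongo import MongoClient

client = MongoClient("mongodb://mongos.example.org:27017")
users = client.app.Users

def get_profile(authenticated_user):
    # Filtering on the shard key doubles as routing: mongos can send
    # the query to a single shard instead of broadcasting it.
    return users.find_one({"user_name": authenticated_user})
```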

CouchDB read authentication

How can I handle read authentication in CouchDB? I know roles can be defined in separate databases, but I want to implement read authentication at the document level. I am thinking about using Node.js, but it does not seem an elegant solution because CouchDB already has an HTTP server and I don't want to add one more (or another application server like Ruby or Python). Is there anyone working on this?
Thanks.
In the recent O'Reilly web cast on CouchDB, J. Chris Anderson mentioned that read authentication was best handled by a combination of partial replication and multiple databases per reader group. Each database would contain only the documents pertaining to that specific group.
It makes the most sense when you think of each reader's CouchDB as a filtered instance of an authority database.
That's basically the correct answer. What I'd add is that document-level read control is hard to get right, especially in the presence of views. Filtering map rows at read time is doable, but not very I/O efficient. Generating reduction values based on filtered map rows, however, is prohibitively expensive.
For those reasons we encourage you to operate something like a database per access group, and make the entire database readable by all users.
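A minimal sketch of kicking off one such filtered replication from the authority database into a per-group database over CouchDB's HTTP API. Database names are hypothetical, "docs/by_group" is a hypothetical filter function in a design document (something like function(doc, req) { return doc.group === req.query.group; }), and it assumes Python's requests package:

```python
# Sketch: filtered replication from the authority database into a
# per-group database that its readers may open.
import requests

resp = requests.post(
    "http://admin:secret@couch.example.org:5984/_replicate",
    json={
        "source": "authority",
        "target": "group_sales",
        "filter": "docs/by_group",          # hypothetical filter
        "query_params": {"group": "sales"},
    })
resp.raise_for_status()
```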

How to divide responsibility between LDAP and RDBMS

I'm a lead developer on a project which is building web applications for my company's SaaS offering. We are currently using LDAP to store user data such as IDs, passwords, contact details, preferences and other user-specific data.
One of the applications we are building is a reporting service that will both collect and present management information to our end users. Obviously this service will require an RDBMS, but it will also need to access user data stored in LDAP.
As I see it, we have two basic implementation options:
Duplicate user data in both LDAP and the RDBMS.
Have the reporting service access LDAP whenever it needs user data.
Although duplicating data (and implementing the mechanisms to make this happen) as suggested in option 1 seems the wrong way to go, my gut feeling is that option 2 would not perform well enough (how do you 'join' LDAP data to RDBMS data as efficiently as a pure RDBMS implementation?).
I did find a related question but I'm still unsure which approach to take. I'd be interested in seeing what people thought of either option or perhaps other options.
Why would you feel that duplicating data would be the wrong way to go? Reporting tools (web-based and otherwise) are mostly built around RDBMSs, so any mix'n'match will introduce unnecessary complexities. Reports are likely to need to be changed fairly frequently (from experience), so you want them to be as simple as possible. The data you store about users is unlikely to change its format very often, so once you have your import function working, you won't need to touch it again.
The only obstacle I can see is latency: how do you ensure that your RDBMS copy is up to date? You might need to ensure that your updating code writes to both destinations. Personally, also, I wouldn't necessarily use LDAP for application specific personal preferences: LDAP can't handle transactions, so what happens when data is updated from several directions? (Transactionality is of course also a problem with letting updaters write to both stores...) I'd rather let the RDBMS be the master for most data, and let LDAP worry only about identity, credentials and entitlements, which are rarely changed and only for one set of purposes. For myself, LDAP's ability to deal with hierarchical data isn't all that great a selling point.
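A minimal sketch of that dual-write idea, with the RDBMS as master and LDAP receiving only the identity attributes. Hypothetical names; it assumes the psycopg2 and ldap3 packages, and the two writes are not atomic, which is exactly the transactionality caveat above:

```python
# Sketch of dual-writing a profile change: RDBMS first (the master),
# then the identity attribute is pushed to LDAP in the same code path.
# NOT atomic - if the LDAP write fails you must reconcile, e.g. with
# a periodic batch sync.
from ldap3 import Connection, MODIFY_REPLACE, Server
import psycopg2

def update_email(user_id, email):
    pg = psycopg2.connect("dbname=app user=app password=secret")
    with pg, pg.cursor() as cur:      # commits on clean exit
        cur.execute("UPDATE users SET mail = %s WHERE id = %s",
                    (email, user_id))
    pg.close()

    ldap = Connection(Server("ldap.example.org"),
                      user="cn=admin,dc=example,dc=org",
                      password="secret", auto_bind=True)
    ldap.modify(f"uid={user_id},ou=people,dc=example,dc=org",
                {"mail": [(MODIFY_REPLACE, [email])]})
```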
Data duplication is not always a bad thing, especially when the usage scenarios are different enough.