Can OpenDJ replication work on a base DN which is not a backend? - replication

I am not able to enable replication on a base DN which is not a backend root:
ou=networkelements,o=networkdata
Here o=networkdata is a backend ID.

OpenDJ replication can only be configured at the top of a backend base DN (although a backend may have multiple distinct base DNs).
If you have a directory tree with o=networkdata and you only want to replicate a subpart of it (ou=networkelements,o=networkdata), you need to put that subtree in a separate backend with a base DN of "ou=networkelements,o=networkdata".
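As a rough sketch (the backend name, hostnames, ports and credentials below are placeholders, and the exact dsconfig/dsreplication option names vary between OpenDJ versions), this amounts to creating a dedicated backend for the subtree and then enabling replication on that base DN:

# Create a separate backend whose base DN is the subtree you want to replicate
dsconfig create-backend \
  --backend-name networkElements \
  --type local-db \
  --set base-dn:ou=networkelements,o=networkdata \
  --set enabled:true \
  --hostname localhost --port 4444 \
  --bindDN "cn=Directory Manager" --bindPassword password \
  --trustAll --no-prompt

# Then enable replication for that base DN between two directory servers
dsreplication enable \
  --baseDN ou=networkelements,o=networkdata \
  --host1 dj1.example.com --port1 4444 --replicationPort1 8989 \
  --bindDN1 "cn=Directory Manager" --bindPassword1 password \
  --host2 dj2.example.com --port2 4444 --replicationPort2 8989 \
  --bindDN2 "cn=Directory Manager" --bindPassword2 password \
  --adminUID admin --adminPassword password \
  --trustAll --no-prompt

(The entries under ou=networkelements,o=networkdata would also need to be exported from the old backend and imported into the new one first.)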

Related

Can OpenLDAP sync Directory Information Tree data using syncrepl from OpenDJ as the provider?

My scenario is that we have a centralized OpenDJ instance in the cloud, and we want to sync (pull) the directory data from this OpenDJ to an OpenLDAP instance running in the same cloud.
I am trying to achieve this using syncrepl, configured in the OpenLDAP slapd.conf file. In that file, the provider ldap://opendjendpoint.my.org:389 is OpenDJ:
syncrepl rid=1
provider=ldap://opendjendpoint.my.org:389
type=refreshOnly
interval=00:00:05:00"
searchbase="o=my.org,c=us"
filter="(objectClass=*)"
scope=sub
attrs="*,+"
schemachecking=off
bindmethod=simple
binddn="cn=syncuser,o=my.org,c=us"
credentials=somepass
The question is: even though OpenDJ runs an LDAP server, can I connect to it using syncrepl from OpenLDAP?
My thought is that OpenLDAP can only sync with RFC 4533 implementations (the LDAP Content Synchronization protocol) and OpenDJ does not implement it. Can somebody provide input on this?
As you've noticed, OpenDJ doesn't implement RFC 4533, which is an experimental RFC. But when replication is enabled, all changes can be retrieved via LDAP (subject to access controls) from the cn=changelog suffix. Synchronization tools such as the LSC project can consume these changes and replay them against other LDAP servers.
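As an illustration (assuming the change log is enabled and the bind DN is allowed to read it; the attribute names shown are the usual change log attributes), the changes can be read with a plain LDAP search, which is the kind of data a tool like LSC would consume:

# Read changes from OpenDJ's change log over LDAP (OpenLDAP client syntax)
ldapsearch -x -H ldap://opendjendpoint.my.org:389 \
  -D "cn=syncuser,o=my.org,c=us" -w somepass \
  -b "cn=changelog" "(changeNumber>=1)" \
  changeNumber targetDN changeType changes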

How to secure HDFS on DC/OS without Enterprise

I'm trying to secure an HDFS cluster on open-source DC/OS, but it doesn't seem to be an easy thing.
The problem I see with HDFS is that it uses the username of the current system user, so without any form of authentication anyone can simply create a user with a certain username and get superuser permissions on the cluster.
So I need some form of authentication. IP-based auth would be fine (only clients with certain IPs could connect to HDFS), but I couldn't find an option to enable it.
Setting up Kerberos for HDFS is not an option, because running another service just to run another service (to run another service, and so on) will only create tons of work.
If enabling any viable form of security is impossible, is there any other DC/OS HDFS-like service I can use? I need some HA storage to fetch config files and sometimes JARs from artifact URIs to run services. I also need a place to store Parquet files from Spark Streaming.
The version of DC/OS HDFS is 2.6.x.
Unfortunately, it seems that Kerberos is the only real form of authentication in HDFS. Without it, HDFS will trust every user.
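To illustrate why: with the default simple authentication (hadoop.security.authentication=simple), the NameNode trusts whatever username the client presents, so any client can pose as the HDFS superuser. A hypothetical example (the path is made up; the superuser name is whichever user runs the NameNode):

# No Kerberos: the client-side username is taken at face value
HADOOP_USER_NAME=hdfs hadoop fs -rm -r /some/protected/path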

HAProxy with https and kerberos

I'm trying to implement a reverse proxy in our system, for a microservices architecture.
The proxy server is HAProxy, which does SSL termination and needs to proxy requests to a backend server with HTTPS and Kerberos authentication.
I succeeded in terminating SSL on the proxy server and passing the request to the HTTPS server (I need the termination in order to route requests to specific backend services based on their body), but I am failing to authenticate with Kerberos on the backend server.
Is it possible to implement Kerberos auth on the proxy server and then pass the TGT to the different backend services?
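For context, the SSL-termination part described above might look roughly like this in haproxy.cfg (names, addresses and the certificate path are placeholders; the body-based routing ACLs are omitted):

frontend fe_https
    bind *:443 ssl crt /etc/haproxy/certs/site.pem
    # use_backend rules based on request content would go here
    default_backend be_service_a

backend be_service_a
    # re-encrypt towards the HTTPS backend service
    server svc-a 10.0.0.11:8443 ssl verify none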
I have successfully done this, and it took some work.
At the time I was using HDP, so I used Ambari to set up a Hive server on the HAProxy node. (This was done solely so that Ambari would manage the Kerberos principal; the Hive server itself never ran.)
Then I merged the keytab for my Hive server (on the proxy) with my other Hive server keytabs so that the principal could be used on the Hive servers. I think I also had to allow it as a principal that could work with Hive. I'm sure there is another path that would let you use delegation, but this was the path of least resistance and meant that Hive mostly managed the keytab. I did have to re-merge the keytab when the keytabs were regenerated, but it wasn't as bad as managing keytabs manually.
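For reference, merging keytabs like this can be done with MIT Kerberos ktutil (the file paths below are placeholders):

# Combine the proxy node's keytab with the Hive server keytab so the same
# principal is usable on both hosts
ktutil
ktutil:  read_kt /etc/security/keytabs/hive.service.keytab
ktutil:  read_kt /etc/security/keytabs/proxy-hive.service.keytab
ktutil:  write_kt /etc/security/keytabs/hive.service.merged.keytab
ktutil:  quit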

Difference between Agent User ID and user/ password while configuring replication agent AEM

What is the difference between the Agent User ID (Settings tab) and the User/Password (Transport tab)? Please share scenarios for both when configuring replication agents in AEM.
This is well documented in Adobe's documentation here.
The missing context is an understanding of how ACLs work: each user/group has certain privileges/rights, which, beyond the normal CRUD operations, include Read ACL, Edit ACL and Replicate. You can read about them here.
Now, coming to your question: a replication agent has a host configuration (the system on which it is set up) and a target configuration (the system it connects to). The Agent User ID is used on the host system, while the User/Password on the Transport tab is for the target system.
For a replication agent on author, the user set as the Agent User ID must have read and replicate rights on all paths that need to be processed, whereas the user specified in User/Password on the Transport tab must have create/write access in order to replicate the content onto the Publish instance.
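As a rough sketch of where these two settings end up (the property names below are the usual ones on a replication agent's jcr:content node; the values are placeholders):

userId            = "replication-service-user"    (Agent User ID, Settings tab; used on the author/host side)
transportUri      = "http://publish-host:4503/bin/receive?sling:authRequestLogin=1"
transportUser     = "transport-user"               (User, Transport tab; used to log in to the target publish instance)
transportPassword = "********"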

Using ldap locally to share login info with webapps - Do I need Kerberos too?

So I'm setting up a dedicated server using Debian 5 Lenny. I will be using some Atlassian Tools (JIRA, Confluence, Bamboo, and Fisheye). I want to use a local LDAP server to store information for the users that will be accessing these software titles, so that they can use one set of credentials to log in.
I also want webmail users to be configured using LDAP.
However, this is a small operation. Three people. That's why all of the software, including the ldap server, will all be on the same machine.
That said, is it safe to store user credentials (including passwords) in LDAP without using Kerberos? I'm confused as to when Kerberos should be used.
Hypothetically, let's say I had two servers on a subnet. Server A receives requests from the outside world for the Atlassian tools. Server A communicates internally with the LDAP server on Server B. In that case, would I use Kerberos?
When do I use Kerberos? When do I not?
I am not setting up anything like Active Directory. No Samba either. Users do not need to log in to a domain (with access to files on the domain); they just need to log in to webapps. But if I were running LDAP on its own dedicated machine, then I might want Kerberos?
:confuzzled: :(
-Sam
The simplest possible answer is yes, it is possible to store user names, user IDs, and passwords without using Kerberos, and in fact directory services accessed via LDAP are an excellent tool for storing this sort of authentication and authorization information.
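For example, a webapp user stored in the directory might look like the following LDIF sketch (the DN, attribute values and hash are made up; the password would normally be stored hashed, e.g. with SSHA):

dn: uid=sam,ou=people,dc=example,dc=com
objectClass: inetOrgPerson
uid: sam
cn: Sam Example
sn: Example
mail: sam@example.com
userPassword: {SSHA}base64-encoded-salted-hash

The applications simply bind to the directory with the user's DN and password to verify credentials; no Kerberos is involved.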
Update:
In my opinion, if you do choose an open source server, you will find OpenDS to be superior to OpenLDAP or Apache.
Basically, if you have Kerberos, you do not need any directory server. If you aren't in a corporate environment and are looking for an identity management store, you should definitely go for a directory server like OpenLDAP or Apache Directory. Kerberos requires a correctly set up DNS and NTP server, which might be way too much. And even if you do set all of that up, those lazy morons at Atlassian still have not implemented Kerberos support in their products, so you can't go that route anyway.
I just noticed that there are only three of you; maybe a simple database setup with MySQL would suffice instead of running a full-blown directory server?