Using LDAP for user authentication in vsftpd on RHEL 8

I want to set up a new FTP server using vsftpd on RHEL 8, and for user authentication we would like to use LDAP (389 Directory Server). As I understand it, the pam_ldap.so module is deprecated in RHEL 8, so I'm wondering how to connect the remote LDAP server to my vsftpd service without that PAM module?

The standalone pam_ldap and libnss_ldap modules (developed by PADL) are obsolete, but they have near-drop-in replacements that come with the nslcd daemon and are also called pam_ldap and libnss_ldap. They might be found in the "nss-pam-ldapd" package.
(The old modules were removed in part because they performed LDAP requests in-process, requiring libldap and all its dependencies to be loaded into every single process that performed user lookups, which caused all kinds of problems. The newer variant of pam_ldap that comes with nslcd/nss-pam-ldapd does not have such issues.)
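For completeness, a minimal sketch of the nslcd route (untested; it reuses the placeholder server and search base from the sssd example further below). In /etc/nslcd.conf:
uri ldap://example.com
base ou=example1,ou=example2
# optionally a bind account:
# binddn cn=proxy,ou=example1,ou=example2
# bindpw secret
With this, /etc/pam.d/vsftpd would reference pam_ldap.so instead of pam_sss.so, and the ldap source would be added to /etc/nsswitch.conf.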
However, Red Hat's preferred option is probably the sssd service, which uses pam_sss and libnss_sss modules. It is somewhat optimized for MS AD and FreeIPA, but can still work with any generic LDAP (and Kerberos) server.

Here is a full setup for connecting vsftpd to LDAP on RHEL 8.
In /etc/vsftpd/vsftpd.conf:
pam_service_name=vsftpd
In /etc/pam.d/vsftpd:
#%PAM-1.0
auth required pam_sss.so domains=vsftpd
account required pam_sss.so
In /etc/sssd/sssd.conf:
[sssd]
config_file_version = 2
services = nss, pam
domains = vsftpd
[domain/vsftpd]
id_provider = ldap
sudo_provider = none
auth_provider = ldap
ldap_uri = ldap://example.com
ldap_search_base = ou=example1,ou=example2
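A few commands that may help apply and verify this on RHEL 8 (the user name is a placeholder):
chmod 600 /etc/sssd/sssd.conf        # sssd refuses to start if the config is not root-only
authselect select sssd --force       # wire pam_sss/nss_sss into the PAM/NSS stacks
systemctl enable --now sssd
systemctl restart vsftpd
getent passwd someldapuser           # should return the user from the 389 DS if the domain works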

Related

SonarQube LDAP plugin deployed but not "enabled"

SQ 5.6, LDAP plugin 2.0.
I've successfully installed the LDAP plugin and restarted the SQ server. In the log (/opt/sonar/logs/sonar.log) the plugin is apparently deployed, but seemingly no attempt is made to initialize/enable it or connect to the LDAP server.
INFO web[o.s.s.p.ServerPluginRepository] Deploy plugin LDAP / 2.0 / 2910f3981167a70a201ccfae01471dfd26c794b7
...
INFO web[o.s.s.p.RailsAppsDeployer] Deploying app: ldap
These are the only mentions of ldap/LDAP in the log.
Relevant part of the conf/sonar.properties file:
sonar.security.realm=LDAP
ldap.url=ldap://myldap:389
ldap.user.baseDn=ou=mycompany,ou=People,dc=myurl,dc=com
I believe I've verified ldap.url and ldap.user.baseDn via JXplorer (an LDAP browser).
What really puzzles me is that I don't see anything like the following in the logs, which is what I'd expect from the SQ docs:
INFO org.sonar.INFO Security realm: LDAP ...
INFO o.s.p.l.LdapContextFactory Test LDAP connection: OK
No errors of any kind are noted in the log.
Any idea why SQ is not even apparently trying to kick off LDAP authentication on a restart?
I had the same problem. I'm running SonarQube using Docker. It did not pick up the changes when I restarted the server from the SonarQube UI. Only after restarting the Docker container did it pick up the changed file.
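For reference, restarting the container would look something like this (assuming it is simply named sonarqube):
docker restart sonarqube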
Well, now it just started working. I don't have an answer as to why though. Maybe something changed with my LDAP server, or there was some latency that needed to be overcome. I didn't change anything on my end that I'm aware of. In any case, thanks to those that responded.

sonarqube 5.6 & LDAP 2.0 failing to authenticate

I am testing an upgrade to sonarqube 5.6 and have installed the ldap 2.0 plugin & copied the relevant configuration forward to my test 5.6 setup.
The relevant config is
sonar.security.realm=LDAP
ldap.url=ldaps://xxxx:636
ldap.bindDn=uid=xxxx,ou=xxxx,dc=xxxx,dc=xxxx
ldap.bindPassword=xxxx
ldap.user.baseDn=dc=xxxx,dc=com
ldap.user.request=(&(objectClass=person)(mail={login}))
ldap.user.realNameAttribute=cn
ldap.user.emailAttribute=mail
I have the following set in conf/sonar.properties
sonar.log.level=DEBUG
On startup I see
2016.07.26 23:57:29 INFO web[o.s.p.l.LdapContextFactory] Test LDAP connection on ldaps://xxxx:636: OK
2016.07.26 23:57:29 INFO web[org.sonar.INFO] Security realm started
If I attempt to login, I get "Authentication failed" on the login screen.
The log file says nothing other than
2016.07.26 23:57:47 DEBUG web[http] GET / | time=67ms
2016.07.26 23:57:47 DEBUG web[http] GET / | time=187ms
2016.07.26 23:57:47 DEBUG web[http] GET /sessions/new | time=89ms
2016.07.26 23:57:53 DEBUG web[http] POST /sessions/login | time=71ms
The same configuration works fine with sonarqube 4.5.7 and ldap 1.4
Ideas welcome on how to investigate further.
You're most likely hitting known issue SONAR-7770 - Authentication fails if LDAP configuration has been forgotten during the upgrade. Note that an Upgrade Note was issued for this problem:
Most specifically, don't forget to copy the related SonarQube plugin and its related configuration in "conf/sonar.properties" (including "sonar.security.realm" and "sonar.security.localUsers" if present) into the new SonarQube instance otherwise you will be locked out after migration.
So it's important that this LDAP configuration is there even during the upgrade. If you did miss that then the easiest way forward here is to replay the upgrade with the LDAP-related configuration correctly set.
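For reference, a sketch of the lines that should already be carried over into conf/sonar.properties of the new instance before the upgrade is run (the ldap.* values are the ones from the question; sonar.security.localUsers and its value are only an assumption, to be included if it was present before):
sonar.security.realm=LDAP
sonar.security.localUsers=admin
ldap.url=ldaps://xxxx:636
ldap.bindDn=uid=xxxx,ou=xxxx,dc=xxxx,dc=xxxx
ldap.bindPassword=xxxx
ldap.user.baseDn=dc=xxxx,dc=com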
Context
Keep in mind that during an upgrade SonarQube updates the dataset and also stores new information in the database (based on new features). The problem in your case would be that the upgrade was done with a partial config (which didn't set sonar.security.realm and sonar.security.localUsers), so SonarQube couldn't figure out whether users were local or not and treated them as local by default. Local users are not authenticated against external authentication providers but locally, which is indeed what we're seeing in your logs (and it obviously fails because the password lives in the LDAP server, not in the SonarQube database).
I fixed it by manually updating the users database table of SonarQube, assuming that all other users are managed by LDAP and only the admin is a local user:
UPDATE sonarqube_production.users SET user_local = 0, external_identity_provider = 'ldap' WHERE id != 'admin';
A little fix to Schakko's query above: it should filter on login, not on id:
UPDATE users SET user_local = 0, external_identity_provider = 'ldap' WHERE login != 'admin';
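To check the result afterwards (same table and columns as in the queries above):
SELECT login, user_local, external_identity_provider FROM users;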

ejabberd stateless configuration

I'm really new to XMPP and I decided to go with ejabberd. First I tried to configure it on Ubuntu, but I got error after error, so I switched to Windows. The server is running now.
I've installed XAMPP and I tested the connection with strophe.js.
I've read some of the ejabberd documentation and watched the tutorial videos, and the presenter talks about a stateless configuration (using ejabberd only for messages and having my own database in which I save messages, users, etc.). I want to achieve that, but I don't really know where to start. I assume that I would have to post each message to my database for storage and also to ejabberd for pushing.
Any ideas/examples/tutorials?
Edit:
2016-05-22 20:28:32.746 [error] <0.532.0>#ejabberd_sql:check_error:991 SQL query 'Q9525209' at {sql_queries,145} failed: <<"Unknown Host">>
2016-05-22 20:28:32.746 [error] <0.532.0>#ejabberd_auth:is_user_exists:316 The authentication module ejabberd_auth_sql returned an error
when checking user <<"admin">> in server <<"localhost">>
Error message: <<"Unknown Host">>
Configuration:
##
## MySQL server:
##
odbc_type: mysql
odbc_server: "127.0.0.1"
odbc_database: "ej_chatapp"
odbc_username: "root"
odbc_password: "password"
##
## If you want to specify the port:
odbc_port: 3306
auth_method: odbc
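Side note on the config above (an assumption based on later ejabberd releases, roughly 16.04 onward, where the odbc_* options were renamed): the equivalent settings would look like this:
sql_type: mysql
sql_server: "127.0.0.1"
sql_database: "ej_chatapp"
sql_username: "root"
sql_password: "password"
sql_port: 3306
auth_method: sql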
In my videos, stateless configuration is mentioned in the context of ejabberd SaaS: https://ejabberd-saas.com
ejabberd SaaS provides APIs that can be used to point to your own backend. Those APIs are not available in ejabberd itself; they are only available in the SaaS offering to ease integration with customer backends.

How to do kerberos authentication on a flink standalone installation?

I have a standalone Flink installation on top of which I want to run a streaming job that is writing data into a HDFS installation. The HDFS installation is part of a Cloudera deployment and requires Kerberos authentication in order to read and write the HDFS. Since I found no documentation on how to make Flink connect with a Kerberos-protected HDFS I had to make some educated guesses about the procedure. Here is what I did so far:
I created a keytab file for my user.
In my Flink job, I added the following code:
UserGroupInformation.loginUserFromKeytab("myusername", "/path/to/keytab");
Finally, I am using a TextOutputFormat to write data to HDFS.
When I run the job, I'm getting the following error:
org.apache.hadoop.security.AccessControlException: SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1730)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1668)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1593)
at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:397)
at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:393)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:393)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:337)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.create(HadoopFileSystem.java:405)
For some odd reason, Flink seems to try SIMPLE authentication, even though I called loginUserFromKeytab. I found another similar issue on Stackoverflow (Error with Kerberos authentication when executing Flink example code on YARN cluster (Cloudera)) which had an answer explaining that:
Standalone Flink currently only supports accessing Kerberos secured HDFS if the user is authenticated on all worker nodes.
That may mean that I have to do some authentication at the OS level, e.g. with kinit. Since my knowledge of Kerberos is very limited, I have no idea how I would do that. I would also like to understand how a program running after kinit actually knows which Kerberos ticket to pick from the local cache when there is no configuration whatsoever regarding this.
I'm not a Flink user, but based on what I've seen with Spark & friends, my guess is that "Authenticated on all worker nodes" means that each worker process has
- a core-site.xml config available on the local fs, with hadoop.security.authentication set to kerberos (among other things)
- the local dir containing core-site.xml added to the CLASSPATH so that it is found automatically by the Hadoop Configuration object [it will revert silently to default hard-coded values otherwise, duh]
- implicit authentication via kinit and the default cache [TGT set globally for the Linux account, impacts all processes, duh] ## or ## implicit authentication via kinit and a "private" cache set thru the KRB5CCNAME env variable (Hadoop supports only "FILE:" type) ## or ## explicit authentication via UserGroupInformation.loginUserFromKeytab() and a keytab available on the local fs
That UGI "login" method is incredibly verbose, so if it was indeed called before Flink tries to initiate the HDFS client from the Configuration, you will notice. On the other hand, if you don't see the verbose stuff, then your attempt to create a private Kerberos TGT is bypassed by Flink, and you have to find a way to bypass Flink :-/
You can also configure your standalone cluster to handle authentication for you without additional code in your jobs.
Export HADOOP_CONF_DIR and point it to the directory where core-site.xml and hdfs-site.xml are located.
Add to flink-conf.yaml:
security.kerberos.login.use-ticket-cache: false
security.kerberos.login.keytab: <path to keytab>
security.kerberos.login.principal: <principal>
env.java.opts: -Djava.security.krb5.conf=<path to krb5 conf>
Add the pre-bundled Hadoop jar to the lib directory of your cluster: https://flink.apache.org/downloads.html
The only dependencies you should need in your jobs are:
compile "org.apache.flink:flink-java:$flinkVersion"
compile "org.apache.flink:flink-clients_2.11:$flinkVersion"
compile "org.apache.hadoop:hadoop-hdfs:$hadoopVersion"
compile "org.apache.hadoop:hadoop-client:$hadoopVersion"
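Put together, a rough sequence under those settings (paths and jar name are placeholders):
export HADOOP_CONF_DIR=/etc/hadoop/conf   # directory holding core-site.xml and hdfs-site.xml
bin/start-cluster.sh                      # start the standalone cluster with the Kerberos settings above
bin/flink run ./target/my-streaming-job.jar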
In order to access a secured HDFS or HBase installation from a standalone Flink installation, you have to do the following:
Log into the server running the JobManager, authenticate against Kerberos using kinit and start the JobManager (without logging out or switching the user in between).
Log into each server running a TaskManager, authenticate again using kinit and start the TaskManager (again, with the same user).
Log into the server from where you want to start your streaming job (often, it's the same machine running the JobManager), log into Kerberos (with kinit) and start your job with /bin/flink run.
In my understanding, kinit logs in the current user and creates a file somewhere in /tmp with some login data. The mostly static class UserGroupInformation looks up that file with the login data when it is loaded for the first time. If the current user is authenticated with Kerberos, the information is used to authenticate against HDFS.
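For the kinit-based procedure above, the commands would look roughly like this (principal, keytab and jar name are placeholders):
kinit myuser@EXAMPLE.COM            # or kinit -kt /path/to/myuser.keytab myuser@EXAMPLE.COM
klist                               # verify the TGT landed in the ticket cache (typically /tmp/krb5cc_<uid>)
bin/jobmanager.sh start             # on the JobManager host, as the same user
bin/taskmanager.sh start            # on each TaskManager host, as the same user
bin/flink run ./my-streaming-job.jar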

How to call Apache NMS from within a sandbox?

I'm trying to call Apache ActiveMQ NMS Version 1.6.0 from my code ('IntPub') that must run in a sandbox in a .NET 4.0 environment for security reasons. The program that creates the sandbox makes my code 'partially trusted' and therefore 'security-transparent' which seems to mean that it can't create a ConnectionFactory (see error log below) because NMS seems to be 'security-critical'. Here's the code that's causing this error:
connecturi = new Uri("tcp://my.server.com:61616");
var connectionFactory = new ConnectionFactory(connecturi);
I also tried this instead with similar results:
connecturi = new Uri("activemq:tcp://my.server.com:61616");
var connectionFactory = NMSConnectionFactory.CreateConnectionFactory(connecturi);
Since I can't change the security level of my assembly (the sandbox prevents it), is there a way to make NMS run as 'safe-critical' so it can be called by 'security-transparent' code? Would I have to recompile it to do so, or does NMS do some operation that would never be considered 'safe-critical'?
I appreciate any help or suggestions...
Assembly 'IntPub, Version=1.0.0.0, Culture=neutral, PublicKeyToken=6fa620743b8dc60a' is partially trusted, which causes the CLR to make it entirely security transparent regardless of any transparency annotations in the assembly itself. In order to access security critical code, this assembly must be fully trusted. Detail:
<OrganizationServiceFault xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/xrm/2011/Contracts">
<ErrorCode>-2147220956</ErrorCode>
<ErrorDetails xmlns:d2p1="http://schemas.datacontract.org/2004/07/System.Collections.Generic" />
<Message>Unexpected exception from plug-in (Execute): Test.Client: System.MethodAccessException: Attempt by security transparent method 'Test.Client.Execute(System.IServiceProvider)' to access security critical method 'Apache.NMS.ActiveMQ.ConnectionFactory..ctor(System.Uri)' failed.
From the error message attributes, it looks like you're running a Dynamics CRM 2011 plugin in sandbox mode, which has some very specific rules about what you can and can't do. In particular, you're only allowed to make network connections via HTTP and HTTPS, so attempting raw TCP sockets will definitely fail.
Take a look at this MSDN page on Plug-in Isolation, Trusts, and Statistics. It looks like there may be a way to relax the network restrictions by modifying a system registry entry to include tcp, etc, in the regex value. Below is an excerpt from the page. Note: I have not done this myself, so can't say for sure it'll work.
Sandboxed plug-ins and custom workflow activities can access the network through the HTTP and HTTPS protocols. This capability provides support for accessing popular web resources like social sites, news feeds, web services, and more. The following web access restrictions apply to this sandbox capability.
Only the HTTP and HTTPS protocols are allowed.
Access to localhost (loopback) is not permitted.
IP addresses cannot be used. You must use a named web address that requires DNS name resolution.
Anonymous authentication is supported and recommended. There is no provision for prompting the logged on user for credentials or saving those credentials.
These default web access restrictions are defined in a registry key on the server that is running the Microsoft.Crm.Sandbox.HostService.exe process. The value of the registry key can be changed by the System Administrator according to business and security needs. The registry key path on the server is:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSCRM\SandboxWorkerOutboundUriPattern
The key value is a regular expression string that defines the web access restrictions.
The default key value is:
"^http[s]?://(?!((localhost[:/])|([.])|([0-9]+[:/])|(0x[0-9a-f]+[:/])|(((([0-9]+)|(0x[0-9A-F]+)).){3}(([0-9]+)|(0x[0-9A-F]+))[:/]))).+";*
By changing this registry key value, you can change the web access for sandboxed plug-ins.
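For illustration only (an untested assumption, not taken from the MSDN page): allowing tcp as well might amount to widening the protocol group at the start of the pattern, e.g.:
"^(http[s]?|tcp)://(?!((localhost[:/])|([.])|([0-9]+[:/])|(0x[0-9a-f]+[:/])|(((([0-9]+)|(0x[0-9A-F]+)).){3}(([0-9]+)|(0x[0-9A-F]+))[:/]))).+";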