sssd Error: Could not start TLS encryption. (unknown error code) - ldap

I am trying to configure Linux machine authentication with Google Secure LDAP. Below are the steps I have done so far.
Added the LDAP client with the following permissions:
Access permission: Entire Domain
Read user information: Entire Domain
Read group information: ON
Installed SSSD on my Ubuntu box (running in Azure):
sudo apt install -y sssd sssd-tools
My sssd.conf file:
[sssd]
debug_level = 7
services = nss, pam
domains = mydomain.com
[pam]
debug_level = 7
[nss]
debug_level = 7
[domain/mydomain.com]
debug_level = 7
cache_credentials = true
ldap_id_use_start_tls = true
ldap_tls_cacertdir = /home/ubuntu/ssl_Linux
ldap_tls_cacert = /home/ubuntu/ssl_Linux/gldap.crt
ldap_tls_cert = /home/ubuntu/ssl_Linux/gldap.crt
ldap_tls_key = /home/ubuntu/ssl_Linux/gldap.key
ldap_uri = ldaps://ldap.google.com:636
ldap_search_base = ou=Users,dc=mydomain,dc=com
ldap_group_name = uniqueMember
id_provider = ldap
auth_provider = ldap
ldap_schema = rfc2307bis
ldap_user_uuid = entryUUID
ldap_groups_use_matching_rule_in_chain = true
ldap_initgroups_use_matching_rule_in_chain = true
enumerate = false
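One note: sssd refuses to start unless /etc/sssd/sssd.conf is owned by root with mode 0600, so after editing the file I apply:
sudo chown root:root /etc/sssd/sssd.conf
sudo chmod 600 /etc/sssd/sssd.conf
sudo systemctl restart sssd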
Here I'm able to start the SSSD service, but I'm getting the errors below:
Nov 15 09:14:54 myserver systemd[1]: Started System Security Services Daemon.
Nov 15 09:14:55 myserver sssd[be[67530]: Could not start TLS encryption. (unknown error code)
Nov 15 09:16:11 myserver sssd[be[67530]: Could not start TLS encryption. (unknown error code)
Nov 15 09:16:11 myserver sssd[be[67530]: Backend is offline
Nov 15 09:17:19 myserver sssd[be[67530]: Could not start TLS encryption. (unknown error code)
Nov 15 09:19:48 myserver sssd[be[67530]: Could not start TLS encryption. (unknown error code)
Nov 15 09:24:02 myserver sssd[be[67530]: Could not start TLS encryption. (unknown error code)
FYI: I'm able to authenticate successfully against Google Secure LDAP using the command below:
LDAPTLS_CERT=mycrt.crt LDAPTLS_KEY=mykey.key ldapsearch -H ldaps://ldap.google.com:636 -b "ou=Users,dc=mydomain,dc=com" -D "my.user@mydomain.com" "(uid=my.user)" -W
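The raw TLS handshake can also be checked with openssl, independently of sssd, using the same certificate pair (a standard openssl s_client invocation; the paths are the ones from my config):
openssl s_client -connect ldap.google.com:636 -cert /home/ubuntu/ssl_Linux/gldap.crt -key /home/ubuntu/ssl_Linux/gldap.key
A "Verify return code: 0 (ok)" at the end would suggest the certificate exchange itself works and the problem lies in sssd's TLS settings.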
Reference: https://helpcenter.itopia.com/en/articles/2394004-configuring-google-cloud-identity-ldap-on-ubuntu-16-04-for-user-logins
Please help me with this,
Thanks :)

I had the same issue.
Adding ldap_tls_cipher_suite = NORMAL:!VERS-TLS1.3 to the sssd.conf file worked for me. I am on Ubuntu 20.04.5 LTS.
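For anyone unsure where the option goes: it belongs in the [domain/...] section of /etc/sssd/sssd.conf, next to the other ldap_tls_* settings, followed by a restart of sssd. A minimal sketch based on the config in the question:
[domain/mydomain.com]
...
ldap_tls_cacert = /home/ubuntu/ssl_Linux/gldap.crt
ldap_tls_cipher_suite = NORMAL:!VERS-TLS1.3
The value uses GnuTLS priority syntax (Ubuntu's OpenLDAP libraries are built against GnuTLS); !VERS-TLS1.3 disables TLS 1.3, which is what works around the handshake failure.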

I tried the same document with a new virtual machine, and it works fine for me.
Just be aware that after configuring the Google LDAP client in the http://admin.google.com/ portal, the change may take up to 24 hours to take effect.
Thanks

Related

Command " /asadmin list-applications " failed in solaris

It appears that server [localhost:4848] does not accept secure connections. Retry with --secure=false.
javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateExpiredException: NotAfter: Thu Jul 21 05:29:59 IST 2022
Command list-applications failed.
I want to verify whether any certificate is installed, and if one is installed, how to resolve the problem. How do I check the installed applications from Solaris? How do I check SSL certificate expiry on a Solaris system?
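A quick way to check certificate expiry from the command line, assuming the default GlassFish keystore location (the path is an assumption; adjust it for your install):
keytool -list -v -keystore /path-to-glassfish/domains/domain1/config/keystore.jks | grep -i 'valid'
Or, if the admin listener is running with TLS, inspect the certificate it actually serves:
echo | openssl s_client -connect localhost:4848 2>/dev/null | openssl x509 -noout -dates
The second command prints the notBefore and notAfter dates of the live certificate.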

TLS-Encrypted Connection with RabbitMQ Using pika

I am finding it impossible to set up an encrypted connection with a RabbitMQ broker using Python's pika library on the client side. My starting point was the pika tutorial example here, but I cannot make it work. I have proceeded as follows.
(1) The RabbitMQ configuration file was:
listeners.tcp.default = 5672
listeners.ssl.default = 5671
ssl_options.verify = verify_peer
ssl_options.fail_if_no_peer_cert = false
ssl_options.cacertfile = /etc/cert/tms.crt
ssl_options.certfile = /etc/cert/tms.crt
ssl_options.keyfile = /etc/cert/tmsPrivKey.pem
auth_mechanisms.1 = PLAIN
auth_mechanisms.2 = AMQPLAIN
auth_mechanisms.3 = EXTERNAL
(2) The rabbitmq-auth-mechanism-ssl plugin was enabled with the following command:
rabbitmq-plugins enable rabbitmq_auth_mechanism_ssl
Successful enabling was confirmed by checking the status with rabbitmq-plugins list.
(3) The correctness of the TLS certificates was verified by using openssl tools as described here.
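For example, checks along these lines (standard openssl usage, with the same files used in the client program below):
openssl verify -CAfile /Xyz/sampleNodeCert/tms.crt /Xyz/sampleNodeCert/node.crt
openssl s_client -connect 127.0.0.1:5671 -cert /Xyz/sampleNodeCert/node.crt -key /Xyz/sampleNodeCert/nodePrivKey.pem -CAfile /Xyz/sampleNodeCert/tms.crt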
(4) The client-side program to set up the connection was:
#!/usr/bin/env python
import logging
import pika
import ssl
from pika.credentials import ExternalCredentials
logging.basicConfig(level=logging.INFO)
context = ssl.create_default_context(cafile="/Xyz/sampleNodeCert/tms.crt")
context.load_cert_chain("/Xyz/sampleNodeCert/node.crt",
                        "/Xyz/sampleNodeCert/nodePrivKey.pem")
ssl_options = pika.SSLOptions(context, '127.0.0.1')
conn_params = pika.ConnectionParameters(host='127.0.0.1',
                                        port=5671,
                                        ssl_options=ssl_options,
                                        credentials=ExternalCredentials())

with pika.BlockingConnection(conn_params) as conn:
    ch = conn.channel()
    ch.queue_declare("foobar")
    ch.basic_publish("", "foobar", "Hello, world!")
    print(ch.basic_get("foobar"))
(5) The client-side program failed with the following error message:
pika.exceptions.ProbableAuthenticationError: ConnectionClosedByBroker: (403) 'ACCESS_REFUSED - Login was refused using authentication mechanism EXTERNAL. For details see the broker logfile.'
(6) The log message in the RabbitMQ broker was:
2019-10-15 20:17:46.028 [info] <0.642.0> accepting AMQP connection <0.642.0> (127.0.0.1:48252 -> 127.0.0.1:5671)
2019-10-15 20:17:46.032 [error] <0.642.0> Error on AMQP connection <0.642.0> (127.0.0.1:48252 -> 127.0.0.1:5671, state: starting):
EXTERNAL login refused: user 'CN=www.node.com,O=Node GmbH,L=NodeTown,ST=NodeProvince,C=DE' - invalid credentials
2019-10-15 20:17:46.043 [info] <0.642.0> closing AMQP connection <0.642.0> (127.0.0.1:48252 -> 127.0.0.1:5671)
(7) The environment in which this test was done is Ubuntu 18.04 using RabbitMQ 3.7.17 on Erlang 22.0.7. On the client side, python3 version 3.6.8 was used.
Questions: Does anyone have any idea as to why my test fails? Where can I find a complete working example of setting up an encrypted connection to RabbitMQ using pika?
NB: I am familiar with this post but none of the tips in the post helped me.
After studying the link provided above by Luke Bakken, I am now in a position to answer my own question. The main change with respect to my original example is that I configure the RabbitMQ broker with a passwordless user which has the same name as the CN field of the TLS certificate on both the server and the client side. To illustrate, below, I go through my example again in detail:
(1) The RabbitMQ configuration file is:
listeners.tcp.default = 5672
listeners.ssl.default = 5671
ssl_cert_login_from = common_name
ssl_options.verify = verify_peer
ssl_options.fail_if_no_peer_cert = true
ssl_options.cacertfile = /etc/cert/tms.crt
ssl_options.certfile = /etc/cert/tms.crt
ssl_options.keyfile = /etc/cert/tmsPrivKey.pem
auth_mechanisms.1 = EXTERNAL
auth_mechanisms.2 = PLAIN
auth_mechanisms.3 = AMQPLAIN
Note that, with the ssl_cert_login_from configuration option, I am asking for the username of the RabbitMQ account to be taken from the "common name" (CN) field of the TLS certificate.
(2) The rabbitmq-auth-mechanism-ssl plugin is enabled with the following command:
rabbitmq-plugins enable rabbitmq_auth_mechanism_ssl
Successful enabling can be confirmed by checking the status with the command rabbitmq-plugins list.
(3) The signed TLS certificate must have the issuer and subject CN fields equal to each other and equal to the hostname of the RabbitMQ broker node. In my case, inspection of the RabbitMQ log file (in /var/log/rabbitmq) shows that the broker is running on a node called rabbit@pnp-vm2. The host name is therefore pnp-vm2. In order to check the CN fields of the client-side certificate, I use the following command:
ap@pnp-vm2: openssl x509 -noout -text -in /etc/cert/node.crt | fgrep CN
Issuer: C = CH, ST = CH, L = Location, O = Organization GmbH, CN = pnp-vm2
Subject: C = DE, ST = NodeProvince, L = NodeTown, O = Node GmbH, CN = pnp-vm2
As you can see, both the Issuer CN field and the Subject CN field are equal to "pnp-vm2" (which is the hostname of the RabbitMQ broker, see above). I tried using this name for only one of the two CN fields, but then the connection to the broker could not be established. In my test environment, it was easy to create a client certificate with identical CN names but, in an operational environment, this may be a lot harder to do. Also, I do not quite understand the reason for this constraint: is it a bug or is it a feature? And does it originate in the particular RabbitMQ library I am using (Python's pika) or in the AMQP protocol? These questions probably deserve a dedicated post.
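For completeness, the passwordless user mentioned at the top can be created with standard rabbitmqctl commands (the username is the certificate CN from this example; the initial password is a throwaway placeholder):
rabbitmqctl add_user 'pnp-vm2' 'temporary-password'
rabbitmqctl clear_password 'pnp-vm2'
rabbitmqctl set_permissions -p / 'pnp-vm2' '.*' '.*' '.*'
clear_password removes the password entirely, so this account can only log in through the EXTERNAL (certificate) mechanism.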
(4) The client-side program to set up the connection is:
#!/usr/bin/env python
import logging
import pika
import ssl
from pika.credentials import ExternalCredentials
logging.basicConfig(level=logging.INFO)
context = ssl.create_default_context(cafile="/home/ap/RocheTe/cert/sampleNodeCert/tms.crt")
context.load_cert_chain("/home/ap/RocheTe/cert/sampleNodeCert/node.crt",
                        "/home/ap/RocheTe/cert/sampleNodeCert/nodePrivKey.pem")
ssl_options = pika.SSLOptions(context, 'pnp-vm2')
conn_params = pika.ConnectionParameters(host='a.b.c.d',
                                        port=5671,
                                        ssl_options=ssl_options,
                                        credentials=ExternalCredentials(),
                                        heartbeat=0)

with pika.BlockingConnection(conn_params) as conn:
    ch = conn.channel()
    ch.queue_declare("foobar")
    ch.basic_publish("", "foobar", "Hello, world!")
    print(ch.basic_get("foobar"))
    input("Press Enter to continue...")
Here, "a.b.c.d" is the IP address of the machine on which the RabbitMQ broker is running.
(5) The environment in which this test was done is Ubuntu 18.04 using RabbitMQ 3.7.17 on Erlang 22.0.7. On the client side, python3 version 3.6.8 was used.
One final word of warning: with this configuration, I was able to establish a secure connection to the RabbitMQ Broker but, for reasons which I still do not understand, it became impossible to start the RabbitMQ Web Management Tool...

LDAP implementation

I want to implement centralized auth using AWS Simple AD (Samba). The client machines are Linux based (Ubuntu and Amazon Linux). On my LDAP server, I created just one user (cn=test) under dc=ldap,dc=test,dc=io.
I am using sssd as the auth client on my Linux machines. Here is my /etc/sssd/sssd.conf:
[sssd]
config_file_version = 2
services = nss, pam
domains = LDAP
[nss]
[pam]
[domain/LDAP]
id_provider = ldap
auth_provider = ldap
ldap_schema = rfc2307
ldap_uri = ldap://ldap.test.io
ldap_default_bind_dn = dc=ldap,dc=test,dc=io
ldap_default_authtok = password01
ldap_default_authtok_type = password
ldap_search_base = dc=ldap,dc=test,dc=io
ldap_user_search_base = dc=ldap,dc=test,dc=io
ldap_group_search_base = dc=ldap,dc=test,dc=io
ldap_user_object_class = inetOrgPerson
ldap_user_gecos = cn
override_shell = /bin/bash
cache_credentials = true
enumerate = true
But it doesn't seem to work from the client; I don't get the LDAP user (I executed getent passwd).
And I got this error:
nss_ldap: reconnecting to LDAP server...
nss_ldap: reconnecting to LDAP server (sleeping 1 seconds)...
nss_ldap: could not search LDAP server - Server is unavailable
No passwd entry for user 'test'
I configured the sssd client following an external reference guide.
Any suggestions for this case?
Thanks
The error message you are getting is from nss_ldap, not from nss_sss. So I assume that in /etc/nsswitch.conf you configured the ldap module either on its own or before sss. If the user information is to be returned by sssd, then use the sss nsswitch module.
I would also recommend not using enumerate = true unless your directory is quite small.
In /etc/nsswitch.conf be sure to have:
passwd: files sss
shadow: files sss
group: files sss
And of course, in the stacks of /etc/pam.d/system-auth-ac and /etc/pam.d/password-auth-ac you have to use the pam_sss.so module.
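A typical auth line in those stacks looks like this (control flags vary per distribution, so treat this as a sketch):
auth    sufficient    pam_sss.so use_first_pass
On Red Hat style systems, running authconfig --enablesssd --enablesssdauth --update generates the sss and pam_sss entries for you instead of hand-editing.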

lsyncd doesn't respect ssh user when deleting files

We have set up lsyncd to sync data between two hosts. The SSH connection is configured to use the user tomcat with the matching id_rsa identity file. For some reason an append/create on the remote works fine, but deleting doesn't work. When rsync tries to delete a file, the root user is used to connect to the destination host instead of the tomcat user (which is used for create/append).
In the logs (/var/log/lsyncd/lsyncd.log) we see:
Wed Feb 15 13:48:24 2017 Normal: Rsyncing list
/test.txt
Wed Feb 15 13:48:26 2017 Normal: Finished (list): 0
Wed Feb 15 13:48:34 2017 Normal: Deleting list
/myfolder//test.txt
Received disconnect from 10.29.146.78: 2: Too many authentication failures for root
Wed Feb 15 13:48:41 2017 Normal: Retrying (list): 255
We use the below configuration (/etc/lsyncd.conf):
settings {
    pidfile = "/var/run/lsyncd.pid",
    statusFile = "/var/tmp/lsyncd.status",
    logfile = "/var/log/lsyncd/lsyncd.log",
    statusInterval = 60,
    logfacility = "user",
    logident = "lsyncd",
    inotifyMode = "CloseWrite",
    maxProcesses = 10,
}
sync {
    default.rsyncssh,
    source = "/myfolder/",
    delete = true,
    host = "remote-host",
    targetdir = "/myfolder/",
    excludeFrom = "/etc/lsyncd/lsyncd.exclude",
    delay = 5,
    rsync = {
        binary = "/usr/bin/rsync",
        archive = true,
        owner = true,
        compress = true,
        _extra = { "--bwlimit=50000", "--delete-after" },
        rsh = "/usr/bin/ssh -l tomcat -i /usr/share/tomcat6/.ssh/id_rsa",
    }
}
As a workaround we can use a /root/.ssh/config file with:
Host remote-host
    Hostname remote-host
    User tomcat
    IdentityFile /usr/share/tomcat6/.ssh/id_rsa
Of course we would rather not have to use this since it should work with the lsyncd.conf configuration.
We're using lsyncd version 2.1.4
The following issue on GitHub helped me solve the same problem:
https://github.com/axkibe/lsyncd/issues/369
What I did was quite simple: I just replaced default.rsyncssh with default.rsync in the lsyncd.conf.lua file.
When using rsyncssh, one has to be careful.
The "ssh {}" configuration parameter has its own "binary", "port", and "_extra" settings; see the documentation for the complete list.
It is a little confusing because "rsync {}" also needs to be configured. Yes, both sections need to be set up.
The "ssh" section is used for delete and move events; the "rsync" section is used for file transfers.
One might avoid the confusion by using rsync instead of rsyncssh, but you would lose the bandwidth efficiency that rsyncssh provides when files get moved.
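Based on the settings named above, a rough, untested sketch of what that means for the configuration in the question (the ssh table keys are the ones the documentation lists):
sync {
    default.rsyncssh,
    source = "/myfolder/",
    host = "remote-host",
    targetdir = "/myfolder/",
    rsync = {
        binary = "/usr/bin/rsync",
        rsh = "/usr/bin/ssh -l tomcat -i /usr/share/tomcat6/.ssh/id_rsa",
    },
    ssh = {
        binary = "/usr/bin/ssh",
        _extra = { "-l", "tomcat", "-i", "/usr/share/tomcat6/.ssh/id_rsa" },
    },
}
With the ssh section filled in, the delete and move events use the same tomcat identity as the file transfers.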

can not log in to ftp since plesk upgrade

None of my FTP accounts work via FTP since I upgraded to Plesk 9.5.4. I get "530 Login incorrect." The username and password are correct, since I can see them both in /etc/passwd and /etc/shadow. I have tried changing the information via Domains -> mydomain.com -> Web Hosting Settings -> FTP Login, and I still get the error. If I add a user name or password via Web Users, it is added to the password file, but that login does not work either. My root login via SSH works fine. Any suggestions?
Thanks,
-Jonathan
Check your /var/log/messages for errors from proftpd.
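For example, watch the log live while attempting a login:
tail -f /var/log/messages | grep proftpd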
Here is my /var/log/messages for a simple FTP session:
Jan 16 16:39:15 localhost xinetd[577]: START: ftp pid=9317 from=::ffff:10.50.2.4
Jan 16 16:39:16 localhost proftpd[9317]: 10.52.52.203 (10.50.2.4[10.50.2.4]) - FTP session opened.
Jan 16 16:39:16 localhost proftpd[9317]: 10.52.52.203 (10.50.2.4[10.50.2.4]) - FTP session closed.
Jan 16 16:39:16 localhost xinetd[577]: EXIT: ftp status=0 pid=9317 duration=1(sec)
Jan 16 16:39:16 localhost xinetd[577]: START: ftp pid=9318 from=::ffff:10.50.2.4
Jan 16 16:39:16 localhost proftpd[9318]: 10.52.52.203 (10.50.2.4[10.50.2.4]) - FTP session opened.
Jan 16 16:39:22 localhost proftpd[9318]: 10.52.52.203 (10.50.2.4[10.50.2.4]) - Preparing to chroot to directory '/var/www/vhosts/domain.tld'
Jan 16 16:40:03 localhost xinetd[577]: Exiting...