Unable to start OpenLDAP for Windows

I watched a YouTube video as a reference for installing OpenLDAP on Windows,
and I also followed the tutorial on zytrax.com.
C:\OpenLDAP>slaptest -f slapd.conf -F slapd.d
5c9eec00 using config directory slapd.d, error 0
config file testing succeeded
There is this part, "Conversion to slapd.d is trivial. After modifying the slapd.conf file as required simply create a new directory/folder called slapd.d. Open a command line (dos box for us oldies), navigate to c:\OpenLDAP (or wherever you put your installation) and enter:", which I don't understand. What do I need to configure in slapd.conf?
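For reference, this is roughly what a minimal slapd.conf is expected to contain before running slaptest. The suffix, rootdn/rootpw, backend and directory names below are placeholders, and the schema path depends on which Windows build you installed, so treat this as a sketch rather than the exact file zytrax describes:
include     ./schema/core.schema
pidfile     ./run/slapd.pid
argsfile    ./run/slapd.args

database    mdb
suffix      "dc=example,dc=com"
rootdn      "cn=admin,dc=example,dc=com"
rootpw      secret
directory   ./data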
C:\OpenLDAP>slapd -d 8 -h "ldaps://localhost/ ldap://localhost/"
5c9ef038 OpenLDAP 2.4.42 Standalone LDAP Server (slapd)
5c9ef038 daemon: bind(2) failed errno=10013 (WSAEACCES)
5c9ef038 daemon: bind(3) failed errno=10013 (WSAEACCES)
5c9ef038 slapd stopped.
5c9ef038 connections_destroy: nothing to destroy.
How do I get my LDAP server to start running?

I had the same issue; in my case the ports were already in use by another service. Try specifying other ports when starting the slapd server:
slapd -d 8 -h "ldaps://localhost:6866/ ldap://localhost:3899/"
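If you'd rather find out what is occupying the default ports (389 for ldap, 636 for ldaps; standard defaults, not something confirmed by the original poster), a quick check on Windows looks like this:
netstat -ano | findstr ":389"
netstat -ano | findstr ":636"
rem The last column is the PID; look it up (replace 1234 with the PID shown above):
tasklist /FI "PID eq 1234"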

Related

Robot Framework - SSH library - Editing a file on remote server

I am writing a test case in Robot Framework in which I have to either copy a file from the local machine (Windows) to the remote server (Linux) or create a new one at that location.
I have used multiple sudo su - commands to switch users and reach the root user on the desired host. As a result, I am not able to use the Put File keyword from SSHLibrary to upload the file. I have reached the desired folder location by executing the commands with the Write keyword.
Since there is no other option left (at least as far as I can tell with my limited knowledge of Robot Framework), I started creating a new file with the vi <filename> command. I have also reached the INSERT mode of the file, BUT I am not able to enter text into it.
Can someone please suggest how I can either:
Copy the file from the local Windows machine to the remote Linux server AFTER multiple su (switch user) commands, or
Create a new text file and enter the content.
Please note: the new file being created/copied is a certificate file, hence I do not wish to write the entire content of the certificate in my test suite file.
The entire test case looks something like this:
First Jump1
    Log    Starting the connection to AWS VM
    # Connection to VM with Public Key
    Connection To VM    ${hostname}    ${username}
    Send Command    sudo su -
    Send Command    su - <ServiceUser1>
    # Reached the Destination server
    Send Command    whoami
    Send Command    ss -tln | grep 127.0.0.1:40
    # Connecting to Particular ZIP
    Send Command    sudo -u <ServiceUser2> /usr/bin/ssh <ServiceUser2>@localhost -p <port>
    Send Command    sudo su -
    # Check Auth Certificate
    Send Command    mosquitto_pub -h ${mq_host} -p ${mq_port} -u ${mq_username} -P ${mq_password}
In the Check Auth Certificate step, the certificate is checked for presence: if it is present, the current certificate is deleted and a new one is created (either by creating a new file or uploading from local); if it is not there, a new certificate is created.
Though it might not be ideal, I was able to achieve what I wanted with:
echo "content" > newFileName
echo "update content" >> newFileName

ssh and sudo: pam_unix(sudo:auth): conversation failed, auth could not identify password for [username]

I'm facing weird behavior trying to run rsync as sudo through SSH with passwordless login.
This is something I do with dozens of servers, but I'm having this frustrating problem connecting to a couple of Ubuntu 18.04.4 servers.
PREMISE
The passwordless SSH from CLIENT to SERVER with account USER works nicely.
When I'm logged in on SERVER I can sudo everything with account USER.
On SERVER I've added the following to /etc/sudoers
user ALL=NOPASSWD:/usr/bin/rsync
Now, if I launch this simple test from machine CLIENT as user USER, I receive the following sudo error message:
$ ssh utente@192.168.200.135 -p 2310 sudo rsync
sudo: no tty present and no askpass program specified
Moreover, looking in the SERVER's /var/log/auth.log I found these errors:
sudo: pam_unix(sudo:auth): conversation failed
sudo: pam_unix(sudo:auth): auth could not identify password for [user]
I am not a PAM expert, but I tested the following solution and it works on Ubuntu 16.04.5 and 20.04.1.
NOTE: /etc/ssh/sshd_config is left at its default configuration.
$ sudo visudo -f /etc/sudoers.d/my_config_file
Add the line below:
my_username ALL=(ALL) NOPASSWD:ALL
and don't forget to restart sshd
$ sudo systemctl restart sshd
I've found a solution thanks to CentOS. In fact, because of the more complex configuration of /etc/sudoers on CentOS (compared to Ubuntu or Debian), I was forced to put my additional configuration in an external file under /etc/sudoers.d/ instead of putting it directly into /etc/sudoers.
SOLUTION:
Putting additional configuration directly into /etc/sudoers wouldn't work.
Putting the needed additional settings in a file within the directory /etc/sudoers.d/ will work.
E.g., these are the config lines put in a file named /etc/sudoers.d/my_config_file:
Host_Alias MYSERVERHOST=192.168.1.135,localhost
# User that will execute Rsync with Sudo from a remote client
rsyncuser MYSERVERHOST=NOPASSWD:/usr/bin/rsync
Why didn't /etc/sudoers work? It's unknown to me even after two days' worth of Internet searching. I find this very obscure and awful.
What follows is a quote from this useful article: https://askubuntu.com/a/931207
Unlike /etc/sudoers, the contents of /etc/sudoers.d survive system upgrades, so it's preferable to create a file there than to modify /etc/sudoers.
For editing any configuration file to be used by sudo, the visudo command is preferable, i.e.:
$ sudo visudo -f /etc/sudoers.d/my_config_file
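Once the NOPASSWD rule for /usr/bin/rsync is in place, the kind of client-side invocation it is meant to enable looks roughly like this; the host, port and paths are placeholders, not taken from the question:
rsync -av -e "ssh -p 2310" --rsync-path="sudo rsync" \
    rsyncuser@192.168.1.135:/etc/secure-data/ /local/backup/
# --rsync-path makes the remote side run rsync under sudo,
# which is exactly the command the sudoers entry whitelists.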
I had a similar problem on a custom Linux server, and the fix was along the same lines as the answers above: as soon as I removed the line your_user ALL=(ALL) NOPASSWD:ALL from /etc/sudoers, the errors were gone.

pam_unix(sudo:auth): conversation failed, auth could not identify password for [username]

I'm using Ansible to provision my CentOS 7 production cluster. Unfortunately, executing the command below results in an Ansible timeout and a Linux Pluggable Authentication Modules (PAM) error, conversation failed.
The same Ansible command works well when executed against a virtual lab made out of Vagrant boxes.
Ansible Command
$ ansible master_server -m yum -a 'name=vim state=installed' -b -K -u lukas -vvvv
123.123.123.123 | FAILED! => {
"msg": "Timeout (7s) waiting for privilege escalation prompt: \u001b[?1h\u001b=\r\r"
}
SSHd Log
# /var/log/secure
Aug 26 13:36:19 master_server sudo: pam_unix(sudo:auth): conversation failed
Aug 26 13:36:19 master_server sudo: pam_unix(sudo:auth): auth could not identify password for [lukas]
I've found the problem. It turned out to be a problem with PAM's auth module! Let me describe how I got to the solution.
Context:
I set up my machine for debugging - that is I had four terminal windows opened.
1st terminal (local machine): Here, I was executing ansible production_server -m yum -a 'name=vim state=installed' -b -K -u username
2nd terminal (production server): Here, I executed journalctl -f (system wide log).
3rd terminal (production server): Here, I executed tail -f /var/log/secure (log for sshd).
4th terminal (production server): Here, I was editing vi /etc/pam.d/sudo file.
Every time I executed the command from the 1st terminal I got these errors:
# ansible error - on local machine
Timeout (7s) waiting for privilege escalation prompt error.
# sshd error - on remote machine
pam_unix(sudo:auth): conversation failed
pam_unix(sudo:auth): auth could not identify password for [username]
I showed my entire setup to my colleague, and he told me that the error had something to do with "PAM". Frankly, it was the first time I had heard about PAM, so I had to read this PAM tutorial.
I figured out that the error relates to the auth interface configured in the /etc/pam.d/sudo file. Digging around the internet, I stumbled upon the pam_permit.so module with the sufficient control flag, and that fixed my problem!
Solution
Basically, what I added was auth sufficient pam_permit.so line to /etc/pam.d/sudo file. Look at the example below.
$ cat /etc/pam.d/sudo
#%PAM-1.0
# Fixing ssh "auth could not identify password for [username]"
auth sufficient pam_permit.so
# Below is original config
auth include system-auth
account include system-auth
password include system-auth
session optional pam_keyinit.so revoke
session required pam_limits.so
session include system-auth
Conclusion:
I spent 4 days arriving at this solution. I stumbled upon over a dozen solutions that did not work for me, ranging from "duplicated sudo password in ansible hosts/config file" and "LDAP specific configuration" to getting advice from always grumpy system admins!
Note:
Since I'm not an expert in PAM, I'm not aware whether this fix affects other aspects of the system, so be cautious about blindly copy-pasting this config! However, if you are an expert on PAM, please share alternative solutions or input with us. Thanks!
Assuming the lukas user is a local account, you should look at how the pam_unix.so module is declared in your system-auth PAM file. But more information about the user account and PAM configuration is necessary for a specific answer.
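A quick way to inspect that (the file name is the CentOS 7 default, which is an assumption about this particular system):
grep pam_unix /etc/pam.d/system-auth
# Typical default on CentOS 7:
# auth        sufficient    pam_unix.so nullok try_first_pass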
While adding auth sufficient pam_permit.so is enough to gain access, using it in anything but the most insecure test environment is not recommended. From the pam_permit man page:
pam_permit is a PAM module that always permit access. It does nothing
else.
So adding pam_permit.so as sufficient for authentication in this manner will completely bypass the security for all users.
Found myself in the same situation, tearing my hair out. In my case, hidden toward the end of the sudoers file, there was the line:
%sudo ALL=(ALL:ALL) ALL
This undoes authorizations that come before it. If you're not using the sudo group then this line can safely be deleted.
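The ordering matters because sudo applies the last matching entry. If you do need to keep the %sudo group rule, one hedged alternative is simply to place the more specific NOPASSWD rule after it (the user and command here are illustrative):
%sudo   ALL=(ALL:ALL) ALL
lukas   ALL=(ALL) NOPASSWD: /usr/bin/rsync
# With this order the per-user NOPASSWD rule wins for rsync.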
I had this error since upgrading sudo to version 1.9.4 with pacman. I hadn't noticed that pacman had provided a new sudoers file.
I just needed to merge /etc/sudoers.pacnew.
See here for more details: https://wiki.archlinux.org/index.php/Pacman/Pacnew_and_Pacsave
I know that this doesn't answer the original question (which pertains to a Centos system), but this is the top Google result for the error message, so I thought I'd leave my solution here in case anyone stumbles across this problem coming from an Arch Linux based operating system.
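For completeness, a hedged sketch of how that merge can be done by hand (the .pacnew path is the one mentioned above; pacdiff from pacman-contrib is an optional convenience):
sudo diff -u /etc/sudoers /etc/sudoers.pacnew   # see what the package changed
sudo visudo                                     # fold the changes in with syntax checking
sudo rm /etc/sudoers.pacnew                     # remove the leftover once merged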
I got the same error when I tried to restart apache2 with sudo service apache2 restart.
When logging in as root I was able to see that the real error lay with the configuration of apache2. It turned out I had removed a site's SSL certificate files a few months earlier but hadn't disabled the site in apache2; a2dissite did the trick.
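In a comparable situation, a sketch of the commands involved (the site name is a placeholder):
sudo apachectl configtest            # prints the actual configuration error
sudo a2dissite broken-site.conf      # disable the site whose certificate files are gone
sudo systemctl reload apache2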

Apache script config with loggly

I am trying to configure Loggly for Apache on my Ubuntu machine.
What I have done is:
curl -O https://www.loggly.com/install/configure-apache.sh
sudo bash configure-apache.sh -a XXXXXX -u XXXXXX
After entering the last command, it says:
ERROR: Apache logs did not make to Loggly in time. Please check network and firewall settings and retry.
Manual instructions to configure Apache2 is available at https://www.loggly.com/docs/sending-apache-logs/. Rsyslog troubleshooting instructions are available at https://www.loggly.com/docs/troubleshooting-rsyslog/
Any idea why this is happening and how to solve it?
This is likely a network issue, a delay in sending the logs, or even an issue with the script. Check out the following link with the manual instructions, https://www.loggly.com/docs/sending-apache-logs/, which you can follow to verify that the script created the configuration files correctly.
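A few generic checks that help narrow it down (the rsyslog file names are assumptions about what the Loggly script drops in place; adjust them to what you actually find):
ls /etc/rsyslog.d/                     # look for the Loggly/Apache snippets the script created
sudo systemctl restart rsyslog         # reload rsyslog after any change
logger "loggly connectivity test"      # send a test event through syslog
sudo tail -n 50 /var/log/syslog        # check for rsyslog errors about the remote endpoint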

RabbitMQ 3.3.1 can not login with guest/guest

I have installed the latest version of RabbitMQ on a VPS Debian Linux box. I tried to log in with guest/guest but got the message "login failed". I did a little research and found that, for security reasons, logging in via guest/guest remotely is prohibited.
I have also tried enabling the guest user on this version to log in remotely by creating a rabbitmq.config file manually (because the installation didn't create one) and placing only the following entry:
[{rabbit, [{loopback_users, []}]}].
and then restarting RabbitMQ with the following commands:
invoke-rc.d rabbitmq-server stop -- to stop
invoke-rc.d rabbitmq-server start -- to start
It still doesn't log me in with guest/guest. I have also tried installing RabbitMQ on a Windows VPS and logging in via guest/guest through localhost, but again I get the same "login failed" message.
Please also point me to a source where I could install an older version of RabbitMQ that does support logging in remotely via guest/guest.
I had the same problem.
I installed RabbitMQ and enabled the web interface as well, but I still couldn't sign in with any user I newly created; this is because you need to be an administrator to access it.
Do not create any config file and mess with it.
This is what I did instead:
Add a new/fresh user, say user test and password test:
rabbitmqctl add_user test test
Give administrative access to the new user:
rabbitmqctl set_user_tags test administrator
Set permission to newly created user:
rabbitmqctl set_permissions -p / test ".*" ".*" ".*"
That's it, enjoy :)
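To double-check the result, the following read-only rabbitmqctl commands should list the new user with its administrator tag and its permissions on the / vhost:
rabbitmqctl list_users
rabbitmqctl list_permissions -p /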
On Debian I tried the same configuration, with the following steps:
Installed RabbitMQ.
Enabled the web-management plug-in (not necessary).
When I tried to log in I had the same error.
So I created a rabbitmq.config file (classic configuration file) inside the /etc/rabbitmq directory with the following content (notice the final dot):
[{rabbit, [{loopback_users, []}]}].
Alternatively, one can create instead a rabbitmq.conf file (new configuration file) inside the same directory with the following content:
loopback_users = none
Then I executed the invoke-rc.d rabbitmq-server start command, and both the console and the Java client were able to connect using the guest/guest credentials.
So I think you have some other problem if this procedure doesn't work. For example your RabbitMQ might be unable to read the configuration file if for some reason you have changed the RABBITMQ_CONFIG_FILE environment variable.
This is a new feature since version 3.3.0. You can only log in using guest/guest on localhost. To log in from other machines or via an IP address you'll have to create users and assign them permissions. This can be done as follows:
rabbitmqctl add_user test test
rabbitmqctl set_user_tags test administrator
rabbitmqctl set_permissions -p / test ".*" ".*" ".*"
Adding the line below to the config file and restarting the server worked for me. Kindly try it in your setup.
loopback_users.guest = false
I got this line from the example RabbitMQ config file on GitHub, as linked here.
Notice: check that your port is 15672 (version > 3.3) if 5672 doesn't work!
First of all, check the chosen answer above:
rabbitmqctl add_user test test
rabbitmqctl set_user_tags test administrator
rabbitmqctl set_permissions -p / test ".*" ".*" ".*"
and if you still can't make the connection work, check whether your port is correct!
For me, this command works:
$ rabbitmqadmin -H 10.140.0.2 -P 15672 -u test -p test list vhosts
+------+----------+
| name | messages |
+------+----------+
| / | |
+------+----------+
For the complete list of ports, check this: What ports does RabbitMQ use?
To verify your RabbitMQ server version, check this: Verify version of rabbitmq
P.S.
For me, after I created the "test" user and ran set_user_tags and set_permissions, I couldn't connect to RabbitMQ via port 5672, but I could connect via 15672.
However, port 15672 always gave me a "blank response" and my code stopped working.
So about 5 minutes later I switched back to 5672, and everything worked!
Very weird problem. I have no time to dig deeper, so I wrote it down here for anyone who meets the same problem.
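A plausible explanation for that behaviour (an assumption, not something confirmed in the answer above): 5672 is the AMQP port that messaging clients connect to, while 15672 is the HTTP port used by the management UI and rabbitmqadmin, so application code pointed at 15672 gets an HTTP response it can't parse:
# 5672  - AMQP: what client libraries and application code should use
# 15672 - HTTP: management UI, rabbitmqadmin, REST API
curl -u test:test http://10.140.0.2:15672/api/overview   # sanity-check the management port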
For others who use Ansible for RabbitMQ provisioning: what I was missing in the rabbitmq_user module was tags: administrator.
Here is my working Ansible configuration to recreate the "guest" user (for development environment purposes; don't do that in a production environment):
- name: Create RabbitMQ user "guest"
  become: yes
  rabbitmq_user:
    user: guest
    password: guest
    vhost: /
    configure_priv: .*
    read_priv: .*
    write_priv: .*
    tags: administrator
    force: yes  # recreate existing user
    state: present
and I also had to set up a file /etc/rabbitmq/rabbitmq.config containing the following:
[{rabbit, [{loopback_users, []}]}].
in order to be able to log in using "guest"/"guest" from outside of localhost.
Create a rabbitmq.conf file with:
loopback_users = none
Dockerfile:
FROM rabbitmq:3.7-management
#Rabbitmq config
COPY rabbitmq.conf /etc/rabbitmq/rabbitmq.conf
#Install vim (edit file)
RUN ["apt-get", "update"]
RUN ["apt-get", "-y", "install", "vim"]
#Enable plugins rabbitmq
RUN rabbitmq-plugins enable --offline rabbitmq_mqtt rabbitmq_federation_management rabbitmq_stomp
Run:
$ docker build -t my-rabbitmq-image .
$ docker run -d --hostname my-rabbit --name some-rabbit -p 8080:15672 my-rabbitmq-image
Check that the rabbitmq.conf file has been copied correctly.
$ docker exec -it my_container_id /bin/bash
$ vim /etc/rabbitmq/rabbitmq.conf
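Once the container is up, one quick way to confirm that the guest user is now allowed from outside the loopback interface is the management HTTP API (port 8080 here because of the -p 8080:15672 mapping above):
curl -u guest:guest http://localhost:8080/api/whoami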
I had the same problem. I tried what was suggested by Gas and ran invoke-rc.d rabbitmq-server start, but it didn't start. I rebooted the server and the web UI then worked with the guest user. Maybe after adding the rabbitmq.config file something else also needed to be started.
I used RabbitMQ version 3.5.3.
One more thing to note: if you're using an AWS instance then you need to open inbound port 15672. (The port for RabbitMQ versions prior to 3.0 is 55672.)
Students and I stared at this problem for an hour. Be sure you've named your files correctly. In the /etc/rabbitmq directory there are two distinct files: there is an /etc/rabbitmq/rabbitmq.config file, which you should edit to set the loopback users as described, but there is another file called rabbitmq-env.conf. Many folks were using tab completion and just adding "ig", which isn't the right file. Double check!
Sometimes you don't need the comma that is there in the configuration file by default. If nothing else is configured below the rabbit tag, the broker will crash on startup because of the trailing comma after {loopback_users, []}. I have spent many hours forgetting this and only later removing the comma; the same applies to all other configurations, including SSL.
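To make the trailing-comma point concrete, this is the difference in the classic rabbitmq.config format (the second form matches the config shown elsewhere in this thread):
%% Wrong - trailing comma after the last option makes the term unparsable:
[{rabbit, [{loopback_users, []},]}].
%% Right - no comma after the final element:
[{rabbit, [{loopback_users, []}]}].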
Try restarting your RabbitMQ and logging in again; that worked for me.
For a slightly different use case, but it might be useful for anyone dealing with accessing the API for monitoring purposes:
I can confirm the answer given by @Oliboy50 works well; however, make sure you enable it for each vhost you want the user to be able to monitor, such as:
permissions:
  - vhost: "{{item.name}}"
    configure_priv: .*
    write_priv: .*
    read_priv: .*
state: present
tags: management
with_items: "{{user_system_users}}"
With this loop I was able to get past the "401 Unauthorized" error when using the API for any vhost.
By default, the guest user is prohibited from connecting from remote hosts; it can only connect over a loopback interface (i.e. localhost). This applies to connections regardless of the protocol. Any other users will not (by default) be restricted in this way.
It is possible to allow the guest user to connect from a remote host by setting the loopback_users configuration to none.
# DANGER ZONE!
#
# allowing remote connections for default user is highly discouraged
# as it dramatically decreases the security of the system. Delete the user
# instead and create a new one with generated secure credentials.
loopback_users = none
Or, in the classic config file format (rabbitmq.config):
%% DANGER ZONE!
%%
%% Allowing remote connections for default user is highly discouraged
%% as it dramatically decreases the security of the system. Delete the user
%% instead and create a new one with generated secure credentials.
[{rabbit, [{loopback_users, []}]}].
See at "guest" user can only connect from localhost
TIP: It is advisable to delete the guest user or at least change its password to a reasonably secure generated value that won't be known to the public.
If you check the log file, under the info report you will see this:
config file(s) : /etc/rabbitmq/rabbitmq.config (not found)
Change the config file permissions using the command below, then log in using guest; it will work:
sudo chmod 777 /etc/rabbitmq/rabbitmq.config
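As a side note, chmod 777 is broader than needed; assuming the package created the usual rabbitmq user and group, something like this is normally sufficient:
sudo chown rabbitmq:rabbitmq /etc/rabbitmq/rabbitmq.config
sudo chmod 644 /etc/rabbitmq/rabbitmq.config
sudo systemctl restart rabbitmq-server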