I'm trying to configure the OpenLDAP client on a SLED 10 host and have run into some problems. I've specified the URI field in the config, like URI ldap://172.16.8.103:7323, but ldapsearch fails with Can't contact LDAP server. With ldapsearch -H ldap://172.16.8.103:7323 it works fine. Setting
host 172.16.8.103
port 7323
instead of URI returns the same error message. Moreover, tcpdump shows that no LDAP requests are made at all in this case. Other settings in the config, like BASE, work fine. What can cause such a problem, and how can I solve it?
Clearly ldapsearch isn't finding the ldap.conf file.
To find out which configuration file ldapsearch consults, you can use one of these commands:
1) strings $(ldd $(readlink -e $(which ldapsearch)) | awk -F'(=>|[[:space:]]\\()' '$2 ~ /ldap/ {print $2}') | fgrep .conf
2) strace ldapsearch -x 2>&1 | fgrep .conf | grep -v '\(resolv\|nsswitch\|host\).conf'
In some unlikely cases you may need to install the binutils package (it is installed by default on most distributions) or the strace package first to run the respective command.
And yes, you can use a ".ldaprc" file in your home directory and forget about hunting down the theoretically unpredictable system-wide path to ldap.conf altogether.
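For example, a minimal ~/.ldaprc could carry the same settings as your system-wide config (the base DN here is just a placeholder):
URI  ldap://172.16.8.103:7323
BASE dc=example,dc=com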
I'm not sure whether the version of OpenLDAP in SLES 10 supports this, but in SLES 11 you can specify the config file through an environment variable.
# LDAPCONF=/etc/ldap.conf ldapsearch
I am looking for a way, using ssh-keyscan, to define a port within the keyscan file specified with the -f flag instead of having to specify it on the command line.
The following is how I currently do it:
/usr/bin/ssh-keyscan -f /home/ansible/.ssh/test-scanner-keyscan_hosts -p 16005 >> /home/ansible/.ssh/known_hosts;
Contents of the keyscan file:
mainserver1.org,shortname1,shortname2
mainserver2.org,shortname1,shortname2
The issue is, each "mainserver" has a unique SSH port that is different from the others. While this will make mainserver1 work, since its port is 16005, mainserver2 will fail because its port is 17005. The only way around it currently is to run a separate ssh-keyscan command for each server, specifying its port, so that it works.
Instead, I want to be able to specify the ports within the file, and/or use a method that scans a list while allowing for unique ports per host. The issue is, there doesn't seem to be any way to do that.
I tried the following within the keyscan file, and it does not work:
mainserver1.org:16005,shortname1,shortname2
mainserver2.org:17005,shortname1,shortname2
Is there any way to make this work, or any way other than ssh-keyscan, or some other way within ansible to make this function like I hope it does? Otherwise, I have to do an ssh-keyscan task for EVERY server because the ssh ports are all different.
Thanks in advance!
You're welcome to use that format, and then use it to drive the actual implementation, since ssh-keyscan -f accepts "-" to read from stdin; thus:
scan_em() {
    local fn port
    fn="$1"
    # collect the distinct :PORT suffixes that appear in the file
    for port in $(grep -Eo ':[0-9]+' "$fn" | sed 's/://' | sort | uniq); do
        # strip that port from the matching lines and scan them on that port
        sed -ne "s/:${port}//p" "$fn" | ssh-keyscan -f - -p "$port"
    done
}
scan_em /home/ansible/.ssh/test-scanner-keyscan_hosts >> /home/ansible/.ssh/known_hosts
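To see what each iteration feeds to ssh-keyscan, here is a quick walk-through against the sample file from the question (the path is the one you already use):
# distinct ports found in the file: 16005 and 17005
grep -Eo ':[0-9]+' /home/ansible/.ssh/test-scanner-keyscan_hosts | sed 's/://' | sort | uniq
# for port 16005 the sed pass emits only the matching line, with the port stripped:
#   mainserver1.org,shortname1,shortname2
# and ssh-keyscan reads that line on stdin (-f -) and scans it with -p 16005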
I'm facing weird behavior trying to run rsync as sudo through ssh with passwordless login.
This is something I do with dozens of servers, but I'm having this frustrating problem connecting to a couple of Ubuntu 18.04.4 servers.
PREMISE
Passwordless SSH from CLIENT to SERVER with account USER works nicely.
When I'm logged in to SERVER I can sudo everything with account USER.
On SERVER I've added the following to /etc/sudoers
user ALL=NOPASSWD:/usr/bin/rsync
Now, if I launch this simple test from machine CLIENT as user USER, I receive the following sudo error message:
$ ssh utente@192.168.200.135 -p 2310 sudo rsync
sudo: no tty present and no askpass program specified
Moreover, looking in the SERVER's /var/log/auth.log I found these errors:
sudo: pam_unix(sudo:auth): conversation failed
sudo: pam_unix(sudo:auth): auth could not identify password for [user]
I am not a PAM expert, but I tested the following solution and it works on Ubuntu 16.04.5 and 20.04.1.
NOTE: The configuration in /etc/ssh/sshd_config is left at its defaults.
$ sudo visudo -f /etc/sudoers.d/my_config_file
add the line below
my_username ALL=(ALL) NOPASSWD:ALL
and don't forget to restart sshd
$ sudo systemctl restart sshd
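To verify from the client side, a quick check could look like this (a sketch; the address and port are the ones from the question, and sudo's -n flag makes sudo fail instead of prompting if a password would still be required):
$ ssh -p 2310 my_username@192.168.200.135 'sudo -n true && echo "passwordless sudo OK"'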
I've found a solution thanks to CentOS. In fact, because of the more complex configuration of /etc/sudoers on CentOS (compared to Ubuntu or Debian), I was forced to put my additional configuration into an external file in /etc/sudoers.d/ instead of putting it directly into /etc/sudoers.
SOLUTION:
Putting additional configurations directly into /etc/sudoers wouldn't work
Putting the needed additional settings in a file within the directory /etc/sudoers.d/ will work
E.g., these are the config lines put in a file named /etc/sudoers.d/my_config_file:
Host_Alias MYSERVERHOST=192.168.1.135,localhost
# User that will execute Rsync with Sudo from a remote client
rsyncuser MYSERVERHOST=NOPASSWD:/usr/bin/rsync
Why didn't /etc/sudoers work? I don't know, even after two days' worth of Internet searching. I find this very obscure and awful.
What follows is a quote from this useful article: https://askubuntu.com/a/931207
Unlike /etc/sudoers, the contents of /etc/sudoers.d survive system upgrades, so it's preferable to create a file there rather than to modify /etc/sudoers.
For editing any configuration file to be used by sudo, the visudo command is preferable.
i.e.
$ sudo visudo -f /etc/sudoers.d/my_config_file
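visudo can also just check the syntax without opening an editor, which is handy after dropping a file into /etc/sudoers.d/ (a small sketch):
$ sudo visudo -c                                  # checks /etc/sudoers and everything it includes
$ sudo visudo -cf /etc/sudoers.d/my_config_file   # checks only the drop-in file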
I had a similar problem on a custom Linux server, and the solution turned out to be along the lines of the answers above.
As soon as I removed the line your_user ALL=(ALL) NOPASSWD:ALL from /etc/sudoers, the errors were gone.
Imagine we have a dynamic number of hosts matching the pattern
test-1.mydomain.com
test-2.mydomain.com
test-3.mydomain.com
...
test-n.mydomain.com
I'd like to ssh to each of those machines without using the full name,
e.g. ssh test-7.mydomain.com
but simply by doing
ssh test-7
Is there a way to use the ssh config to create pattern-like aliases like this?
Yes!
Yes, there is a way to do this directly in your ssh configuration.
host allows PATTERNS
hostname allows TOKENS
How:
Add this snippet to your ssh configuration ~/.ssh/config:
# all test-* hosts should match to test-*.mydomain.example.com
host test-*
hostname %h.mydomain.example.com
Verify:
You can verify this with the -G flag to ssh. If your ssh is older and does not support -G you can try parsing verbose ssh output.
# if your ssh supports -G
% ssh test-17 -G | grep hostname
hostname test-17.mydomain.example.com
# if your ssh does not support -G
% ssh -v -v test-n blarg >/dev/null 2>&1 | grep resolv
debug2: resolving "test-n.mydomain.example.com" port 22
ssh: Could not resolve hostname test-n.mydomain.example.com: Name or service not known
Notes:
ssh uses the first host line that matches, so it is good practice to add your PATTERN host lines at the bottom of your configuration file.
If your test-n name patterns contain only a single-character suffix, you can use a ? pattern to make a less greedy match: test-? will match test-1, test-a, and test-z, but not test-10.
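For example, a variant of the snippet above using that pattern (same assumed domain as before):
host test-?
hostname %h.mydomain.example.com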
You can create an ssh config file and pre-set your servers.
See this tutorial, I hope it helps you!
ssh config
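If you prefer explicit entries over a pattern, a minimal per-host block in ~/.ssh/config might look like this (the user name here is just a placeholder):
Host test-1
    HostName test-1.mydomain.com
    User my_user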
You can also create a function in your bash startup file (e.g. ~/.bashrc) for SSH access.
Like this:
function ssh_test () {
    [[ $1 =~ ^('test-1'|'test-2'|'test-3')$ ]] || { echo 'Not a valid value !!' && return ;}
    local domain="$1.mydomain.com"
    ssh my_user@"$domain"
}
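Usage would then look like this (my_user being whatever account the function hard-codes):
$ ssh_test test-2    # runs: ssh my_user@test-2.mydomain.com
$ ssh_test test-9    # prints "Not a valid value !!" because it is not in the allowed list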
If it is an option for you, you could add a search domain in the resolv.conf file (I'm assuming you are on Linux).
You would need to add a line like this:
search mydomain.com
which will have SSH (and most other apps) look for test-n, then test-n.mydomain.com.
If you are not managing the resolv.conf file yourself (for example if you use systemd-networkd or NetworkManager), you will have to adjust the search domains in their configuration files.
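For example (a sketch; the connection name is a placeholder):
# systemd-networkd: in the relevant /etc/systemd/network/*.network file
[Network]
Domains=mydomain.com
# NetworkManager: via nmcli
$ nmcli connection modify "my-connection" ipv4.dns-search mydomain.com
$ nmcli connection up "my-connection"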
I'm using Flask with Apache (mod_wsgi).
When I run ssh as an external command with subprocess.call("ssh ......", shell=True)
(my Python Flask code, which is not wrong):
ssh = "sshpass -p \""+password+"\" ssh -p 6001 "+username+"#"+servername+" \"mkdir ~/MY_SERVER\""
subprocess.call(ssh, shell=True)
I got this error on Apache error_log : Failed to get a pseudo terminal: Permission denied
How can I fix this?
I've had this problem under RHEL 7. It's due to SELinux blocking the apache user from accessing a pty. To solve it:
Disable or set SELinux as permissive (check your security needs): edit /etc/selinux/config and reboot.
Allow apache to control its directory for storing SSH keys:
chown apache /etc/share/httpd
Then, as the apache user (sudo -u apache), ssh to the desired host and accept the key.
I think apache's login shell is "/sbin/nologin".
If you want to allow apache to use shell commands, modify /etc/passwd and change its login shell to another shell like "/bin/bash".
However, this method is a security risk. Many Python SSH modules are available on the internet; use one of them instead.
What you are doing seems frightfully insecure. If you cannot use a Python library for your SSH connections, then you should at least plug the hole that is shell=True. There is very little here which is done by the shell anyway; doing it in Python affords you more control, and removes a big number of moving parts.
subprocess.call(['/usr/bin/sshpass', '-p', password,
                 '/usr/bin/ssh', '-T', '-p', '6001', '{0}@{1}'.format(username, servername),
                 'mkdir ~/MY_SERVER'])
If you cannot hard-code the paths to sshpass and ssh, you should at least make sure you have a limited, controlled PATH variable in your environment before doing any of this.
The fix for Failed to get a pseudo-terminal is usually to add a -T flag to the ssh command line. I did that above. If your real code actually requires a tty (which mkdir obviously does not), perhaps experiment with -t instead, and/or redirecting standard input and standard output.
How can I clear the Apache cache in XAMPP?
I tried the 'htcacheclean -r' command, but it always generates an error.
As far as I know, Apache doesn't cache the files/scripts, but a system administrator said: 'Apache is caching the site, so clear the Apache(!) cache.'
Take a look at this:
Use mod_cache at http://httpd.apache.org/docs/2.0/mod/mod_cache.html
CacheDisable /local_files
Description: Disable caching of specified URLs
Syntax: CacheDisable url-string
Context: server config, virtual host
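For orientation, here is a sketch of what a disk cache setup in XAMPP's httpd.conf might look like (module names assume Apache 2.4; the CacheRoot path is a placeholder):
LoadModule cache_module modules/mod_cache.so
LoadModule cache_disk_module modules/mod_cache_disk.so
CacheRoot "C:/xampp/apache/cache"
CacheEnable disk /
CacheDisable /local_files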
Try this if the others are not working:
htcacheclean -p C:\xampp\htdocs\yourproject -rv -L 1000M
This way, you specify the -p path explicitly rather than expecting XAMPP to find it.
The -r = Clean thoroughly. This assumes that the Apache web server is not running. This option is mutually exclusive with the -d option and implies -t.
The -v = Be verbose and print statistics. This option is mutually exclusive with the -d option.
The -L 1000M = Specify LIMIT as the total disk cache inode limit (in megabytes).
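Since -r assumes the server is stopped, the whole sequence might look like this on XAMPP for Windows (a sketch; the paths assume a default XAMPP install, and you can also stop and start Apache from the XAMPP control panel instead of httpd -k):
REM stop Apache; htcacheclean -r assumes the server is not running
C:\xampp\apache\bin\httpd.exe -k stop
C:\xampp\apache\bin\htcacheclean.exe -p C:\xampp\htdocs\yourproject -rv -L 1000M
REM start Apache again
C:\xampp\apache\bin\httpd.exe -k start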